2026-02-16 02:23:58.166304 | Job console starting
2026-02-16 02:23:58.180664 | Updating git repos
2026-02-16 02:23:58.243787 | Cloning repos into workspace
2026-02-16 02:23:58.502389 | Restoring repo states
2026-02-16 02:23:58.533920 | Merging changes
2026-02-16 02:23:58.533958 | Checking out repos
2026-02-16 02:23:58.792896 | Preparing playbooks
2026-02-16 02:23:59.475243 | Running Ansible setup
2026-02-16 02:24:03.968950 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-16 02:24:04.713586 |
2026-02-16 02:24:04.713737 | PLAY [Base pre]
2026-02-16 02:24:04.732189 |
2026-02-16 02:24:04.732330 | TASK [Setup log path fact]
2026-02-16 02:24:04.765629 | orchestrator | ok
2026-02-16 02:24:04.783288 |
2026-02-16 02:24:04.783418 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-16 02:24:04.830902 | orchestrator | ok
2026-02-16 02:24:04.846387 |
2026-02-16 02:24:04.846499 | TASK [emit-job-header : Print job information]
2026-02-16 02:24:04.904310 | # Job Information
2026-02-16 02:24:04.904562 | Ansible Version: 2.16.14
2026-02-16 02:24:04.904620 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-02-16 02:24:04.904677 | Pipeline: periodic-midnight
2026-02-16 02:24:04.904718 | Executor: 521e9411259a
2026-02-16 02:24:04.904754 | Triggered by: https://github.com/osism/testbed
2026-02-16 02:24:04.904792 | Event ID: d99937c864204dcdb107066d5b4daadc
2026-02-16 02:24:04.914633 |
2026-02-16 02:24:04.914765 | LOOP [emit-job-header : Print node information]
2026-02-16 02:24:05.038879 | orchestrator | ok:
2026-02-16 02:24:05.039173 | orchestrator | # Node Information
2026-02-16 02:24:05.039211 | orchestrator | Inventory Hostname: orchestrator
2026-02-16 02:24:05.039237 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-16 02:24:05.039259 | orchestrator | Username: zuul-testbed03
2026-02-16 02:24:05.039280 | orchestrator | Distro: Debian 12.13
2026-02-16 02:24:05.039304 | orchestrator | Provider: static-testbed
2026-02-16 02:24:05.039325 | orchestrator | Region:
2026-02-16 02:24:05.039346 | orchestrator | Label: testbed-orchestrator
2026-02-16 02:24:05.039367 | orchestrator | Product Name: OpenStack Nova
2026-02-16 02:24:05.039387 | orchestrator | Interface IP: 81.163.193.140
2026-02-16 02:24:05.065042 |
2026-02-16 02:24:05.065282 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-16 02:24:05.568093 | orchestrator -> localhost | changed
2026-02-16 02:24:05.584250 |
2026-02-16 02:24:05.584403 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-16 02:24:06.628821 | orchestrator -> localhost | changed
2026-02-16 02:24:06.650529 |
2026-02-16 02:24:06.650687 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-16 02:24:06.940985 | orchestrator -> localhost | ok
2026-02-16 02:24:06.954730 |
2026-02-16 02:24:06.954941 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-16 02:24:06.985591 | orchestrator | ok
2026-02-16 02:24:07.007446 | orchestrator | included: /var/lib/zuul/builds/38b924e1c53c45ae91259ec19ed86344/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-16 02:24:07.015560 |
2026-02-16 02:24:07.015657 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-16 02:24:10.404915 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-16 02:24:10.405213 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/38b924e1c53c45ae91259ec19ed86344/work/38b924e1c53c45ae91259ec19ed86344_id_rsa
2026-02-16 02:24:10.405264 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/38b924e1c53c45ae91259ec19ed86344/work/38b924e1c53c45ae91259ec19ed86344_id_rsa.pub
2026-02-16 02:24:10.405292 | orchestrator -> localhost | The key fingerprint is:
2026-02-16 02:24:10.405318 | orchestrator -> localhost | SHA256:aGR6xyfbeZ2gHjhPo2yTKojo7jy23gX5Rs+MH8jfR0c zuul-build-sshkey
2026-02-16 02:24:10.405341 | orchestrator -> localhost | The key's randomart image is:
2026-02-16 02:24:10.405377 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-16 02:24:10.405400 | orchestrator -> localhost | | |
2026-02-16 02:24:10.405422 | orchestrator -> localhost | | |
2026-02-16 02:24:10.405443 | orchestrator -> localhost | | o |
2026-02-16 02:24:10.405463 | orchestrator -> localhost | | .+ o E |
2026-02-16 02:24:10.405483 | orchestrator -> localhost | | o..+ S . o |
2026-02-16 02:24:10.405514 | orchestrator -> localhost | | =o*. * + + . |
2026-02-16 02:24:10.405535 | orchestrator -> localhost | |.. . B =+.O o o |
2026-02-16 02:24:10.405555 | orchestrator -> localhost | |+oo + o.** = |
2026-02-16 02:24:10.405577 | orchestrator -> localhost | |*Bo. ..=oo+ |
2026-02-16 02:24:10.405598 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-16 02:24:10.405653 | orchestrator -> localhost | ok: Runtime: 0:00:02.886839
2026-02-16 02:24:10.413308 |
2026-02-16 02:24:10.413426 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-16 02:24:10.444536 | orchestrator | ok
2026-02-16 02:24:10.455272 | orchestrator | included: /var/lib/zuul/builds/38b924e1c53c45ae91259ec19ed86344/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-16 02:24:10.464880 |
2026-02-16 02:24:10.465101 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-16 02:24:10.488992 | orchestrator | skipping: Conditional result was False
2026-02-16 02:24:10.497370 |
2026-02-16 02:24:10.497484 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-16 02:24:11.273264 | orchestrator | changed
2026-02-16 02:24:11.281414 |
2026-02-16 02:24:11.281956 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-16 02:24:11.647058 | orchestrator | ok
2026-02-16 02:24:11.659446 |
2026-02-16 02:24:11.659613 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-16 02:24:12.152512 | orchestrator | ok
2026-02-16 02:24:12.159914 |
2026-02-16 02:24:12.160032 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-16 02:24:12.600042 | orchestrator | ok
2026-02-16 02:24:12.608651 |
2026-02-16 02:24:12.608788 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-16 02:24:12.633280 | orchestrator | skipping: Conditional result was False
2026-02-16 02:24:12.646320 |
2026-02-16 02:24:12.646471 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-16 02:24:13.096397 | orchestrator -> localhost | changed
2026-02-16 02:24:13.114300 |
2026-02-16 02:24:13.114435 | TASK [add-build-sshkey : Add back temp key]
2026-02-16 02:24:13.441975 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/38b924e1c53c45ae91259ec19ed86344/work/38b924e1c53c45ae91259ec19ed86344_id_rsa (zuul-build-sshkey)
2026-02-16 02:24:13.442239 | orchestrator -> localhost | ok: Runtime: 0:00:00.018226
2026-02-16 02:24:13.450741 |
2026-02-16 02:24:13.450871 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-16 02:24:13.896107 | orchestrator | ok
2026-02-16 02:24:13.904528 |
2026-02-16 02:24:13.904665 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-16 02:24:13.941044 | orchestrator | skipping: Conditional result was False
2026-02-16 02:24:13.996333 |
2026-02-16 02:24:13.996470 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-16 02:24:14.431606 | orchestrator | ok
2026-02-16 02:24:14.443597 |
2026-02-16 02:24:14.443726 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-16 02:24:14.484761 | orchestrator | ok
2026-02-16 02:24:14.492813 |
2026-02-16 02:24:14.492919 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-16 02:24:14.799915 | orchestrator -> localhost | ok
2026-02-16 02:24:14.807409 |
2026-02-16 02:24:14.807520 | TASK [validate-host : Collect information about the host]
2026-02-16 02:24:16.152336 | orchestrator | ok
2026-02-16 02:24:16.169597 |
2026-02-16 02:24:16.169717 | TASK [validate-host : Sanitize hostname]
2026-02-16 02:24:16.258203 | orchestrator | ok
2026-02-16 02:24:16.268770 |
2026-02-16 02:24:16.268957 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-16 02:24:16.882994 | orchestrator -> localhost | changed
2026-02-16 02:24:16.897705 |
2026-02-16 02:24:16.897885 | TASK [validate-host : Collect information about zuul worker]
2026-02-16 02:24:17.422475 | orchestrator | ok
2026-02-16 02:24:17.431651 |
2026-02-16 02:24:17.431799 | TASK [validate-host : Write out all zuul information for each host]
2026-02-16 02:24:17.996546 | orchestrator -> localhost | changed
2026-02-16 02:24:18.018319 |
2026-02-16 02:24:18.018469 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-16 02:24:18.491187 | orchestrator | ok
2026-02-16 02:24:18.501710 |
2026-02-16 02:24:18.501851 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-16 02:24:40.978079 | orchestrator | changed:
2026-02-16 02:24:40.978348 | orchestrator | .d..t...... src/
2026-02-16 02:24:40.978387 | orchestrator | .d..t...... src/github.com/
2026-02-16 02:24:40.978413 | orchestrator | .d..t...... src/github.com/osism/
2026-02-16 02:24:40.978498 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-16 02:24:40.978521 | orchestrator | RedHat.yml
2026-02-16 02:24:40.994816 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-16 02:24:40.994848 | orchestrator | RedHat.yml
2026-02-16 02:24:40.994902 | orchestrator | = 1.53.0"...
2026-02-16 02:24:51.271255 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-16 02:24:51.290630 | orchestrator | - Finding latest version of hashicorp/null...
2026-02-16 02:24:51.632418 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-16 02:24:55.080378 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-16 02:24:55.144200 | orchestrator | - Installing hashicorp/local v2.6.2...
2026-02-16 02:24:55.686672 | orchestrator | - Installed hashicorp/local v2.6.2 (signed, key ID 0C0AF313E5FD9F80)
2026-02-16 02:24:55.746255 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-16 02:24:56.209062 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-16 02:24:56.209111 | orchestrator |
2026-02-16 02:24:56.209117 | orchestrator | Providers are signed by their developers.
2026-02-16 02:24:56.209122 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-16 02:24:56.209127 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-16 02:24:56.209139 | orchestrator |
2026-02-16 02:24:56.209144 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-16 02:24:56.209149 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-16 02:24:56.209179 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-16 02:24:56.209183 | orchestrator | you run "tofu init" in the future.
2026-02-16 02:24:56.209447 | orchestrator |
2026-02-16 02:24:56.209457 | orchestrator | OpenTofu has been successfully initialized!
2026-02-16 02:24:56.209473 | orchestrator |
2026-02-16 02:24:56.209478 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-16 02:24:56.209482 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-16 02:24:56.209486 | orchestrator | should now work.
2026-02-16 02:24:56.209490 | orchestrator |
2026-02-16 02:24:56.209497 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-16 02:24:56.209501 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-16 02:24:56.209506 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-16 02:24:56.393701 | orchestrator | Created and switched to workspace "ci"!
2026-02-16 02:24:56.393760 | orchestrator |
2026-02-16 02:24:56.393767 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-16 02:24:56.393772 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-16 02:24:56.393776 | orchestrator | for this configuration.
2026-02-16 02:24:56.521901 | orchestrator | ci.auto.tfvars
2026-02-16 02:24:56.524316 | orchestrator | default_custom.tf
2026-02-16 02:24:57.508444 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-16 02:24:58.014692 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-16 02:24:58.231051 | orchestrator |
2026-02-16 02:24:58.231195 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-16 02:24:58.231217 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-16 02:24:58.231233 | orchestrator | + create
2026-02-16 02:24:58.231246 | orchestrator | <= read (data resources)
2026-02-16 02:24:58.231265 | orchestrator |
2026-02-16 02:24:58.231284 | orchestrator | OpenTofu will perform the following actions:
2026-02-16 02:24:58.231321 | orchestrator |
2026-02-16 02:24:58.231342 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-02-16 02:24:58.231363 | orchestrator | # (config refers to values not yet known)
2026-02-16 02:24:58.231377 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-02-16 02:24:58.231390 | orchestrator | + checksum = (known after apply)
2026-02-16 02:24:58.231400 | orchestrator | + created_at = (known after apply)
2026-02-16 02:24:58.231411 | orchestrator | + file = (known after apply)
2026-02-16 02:24:58.231422 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.231464 | orchestrator | + metadata = (known after apply)
2026-02-16 02:24:58.231476 | orchestrator | + min_disk_gb = (known after apply)
2026-02-16 02:24:58.231487 | orchestrator | + min_ram_mb = (known after apply)
2026-02-16 02:24:58.231498 | orchestrator | + most_recent = true
2026-02-16 02:24:58.231510 | orchestrator | + name = (known after apply)
2026-02-16 02:24:58.231520 | orchestrator | + protected = (known after apply)
2026-02-16 02:24:58.231531 | orchestrator | + region = (known after apply)
2026-02-16 02:24:58.231545 | orchestrator | + schema = (known after apply)
2026-02-16 02:24:58.231591 | orchestrator | + size_bytes = (known after apply)
2026-02-16 02:24:58.231612 | orchestrator | + tags = (known after apply)
2026-02-16 02:24:58.231630 | orchestrator | + updated_at = (known after apply)
2026-02-16 02:24:58.231650 | orchestrator | }
2026-02-16 02:24:58.231664 | orchestrator |
2026-02-16 02:24:58.231675 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-02-16 02:24:58.231686 | orchestrator | # (config refers to values not yet known)
2026-02-16 02:24:58.231715 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-02-16 02:24:58.231727 | orchestrator | + checksum = (known after apply)
2026-02-16 02:24:58.231738 | orchestrator | + created_at = (known after apply)
2026-02-16 02:24:58.231749 | orchestrator | + file = (known after apply)
2026-02-16 02:24:58.231759 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.231770 | orchestrator | + metadata = (known after apply)
2026-02-16 02:24:58.231780 | orchestrator | + min_disk_gb = (known after apply)
2026-02-16 02:24:58.231791 | orchestrator | + min_ram_mb = (known after apply)
2026-02-16 02:24:58.231801 | orchestrator | + most_recent = true
2026-02-16 02:24:58.231813 | orchestrator | + name = (known after apply)
2026-02-16 02:24:58.231823 | orchestrator | + protected = (known after apply)
2026-02-16 02:24:58.231834 | orchestrator | + region = (known after apply)
2026-02-16 02:24:58.231845 | orchestrator | + schema = (known after apply)
2026-02-16 02:24:58.231855 | orchestrator | + size_bytes = (known after apply)
2026-02-16 02:24:58.231866 | orchestrator | + tags = (known after apply)
2026-02-16 02:24:58.231876 | orchestrator | + updated_at = (known after apply)
2026-02-16 02:24:58.231887 | orchestrator | }
2026-02-16 02:24:58.231898 | orchestrator |
2026-02-16 02:24:58.231909 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-02-16 02:24:58.231920 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-02-16 02:24:58.231930 | orchestrator | + content = (known after apply)
2026-02-16 02:24:58.231942 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-16 02:24:58.231952 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-16 02:24:58.231963 | orchestrator | + content_md5 = (known after apply)
2026-02-16 02:24:58.231974 | orchestrator | + content_sha1 = (known after apply)
2026-02-16 02:24:58.231984 | orchestrator | + content_sha256 = (known after apply)
2026-02-16 02:24:58.231995 | orchestrator | + content_sha512 = (known after apply)
2026-02-16 02:24:58.232005 | orchestrator | + directory_permission = "0777"
2026-02-16 02:24:58.232016 | orchestrator | + file_permission = "0644"
2026-02-16 02:24:58.232027 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-02-16 02:24:58.232037 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.232048 | orchestrator | }
2026-02-16 02:24:58.232066 | orchestrator |
2026-02-16 02:24:58.232078 | orchestrator | # local_file.id_rsa_pub will be created
2026-02-16 02:24:58.232089 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-02-16 02:24:58.232100 | orchestrator | + content = (known after apply)
2026-02-16 02:24:58.232111 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-16 02:24:58.232121 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-16 02:24:58.232132 | orchestrator | + content_md5 = (known after apply)
2026-02-16 02:24:58.232142 | orchestrator | + content_sha1 = (known after apply)
2026-02-16 02:24:58.232153 | orchestrator | + content_sha256 = (known after apply)
2026-02-16 02:24:58.232164 | orchestrator | + content_sha512 = (known after apply)
2026-02-16 02:24:58.232174 | orchestrator | + directory_permission = "0777"
2026-02-16 02:24:58.232185 | orchestrator | + file_permission = "0644"
2026-02-16 02:24:58.232206 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-02-16 02:24:58.232217 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.232228 | orchestrator | }
2026-02-16 02:24:58.232238 | orchestrator |
2026-02-16 02:24:58.232263 | orchestrator | # local_file.inventory will be created
2026-02-16 02:24:58.232274 | orchestrator | + resource "local_file" "inventory" {
2026-02-16 02:24:58.232284 | orchestrator | + content = (known after apply)
2026-02-16 02:24:58.232295 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-16 02:24:58.232305 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-16 02:24:58.232316 | orchestrator | + content_md5 = (known after apply)
2026-02-16 02:24:58.232327 | orchestrator | + content_sha1 = (known after apply)
2026-02-16 02:24:58.232338 | orchestrator | + content_sha256 = (known after apply)
2026-02-16 02:24:58.232349 | orchestrator | + content_sha512 = (known after apply)
2026-02-16 02:24:58.232360 | orchestrator | + directory_permission = "0777"
2026-02-16 02:24:58.232370 | orchestrator | + file_permission = "0644"
2026-02-16 02:24:58.232381 | orchestrator | + filename = "inventory.ci"
2026-02-16 02:24:58.232391 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.232402 | orchestrator | }
2026-02-16 02:24:58.232412 | orchestrator |
2026-02-16 02:24:58.232423 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-02-16 02:24:58.232434 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-02-16 02:24:58.232445 | orchestrator | + content = (sensitive value)
2026-02-16 02:24:58.232455 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-16 02:24:58.232466 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-16 02:24:58.232477 | orchestrator | + content_md5 = (known after apply)
2026-02-16 02:24:58.232487 | orchestrator | + content_sha1 = (known after apply)
2026-02-16 02:24:58.232498 | orchestrator | + content_sha256 = (known after apply)
2026-02-16 02:24:58.232508 | orchestrator | + content_sha512 = (known after apply)
2026-02-16 02:24:58.232519 | orchestrator | + directory_permission = "0700"
2026-02-16 02:24:58.232530 | orchestrator | + file_permission = "0600"
2026-02-16 02:24:58.232540 | orchestrator | + filename = ".id_rsa.ci"
2026-02-16 02:24:58.232551 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.232588 | orchestrator | }
2026-02-16 02:24:58.232599 | orchestrator |
2026-02-16 02:24:58.232610 | orchestrator | # null_resource.node_semaphore will be created
2026-02-16 02:24:58.232621 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-02-16 02:24:58.232631 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.232642 | orchestrator | }
2026-02-16 02:24:58.232653 | orchestrator |
2026-02-16 02:24:58.232664 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-16 02:24:58.232675 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-16 02:24:58.232685 | orchestrator | + attachment = (known after apply)
2026-02-16 02:24:58.232696 | orchestrator | + availability_zone = "nova"
2026-02-16 02:24:58.232707 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.232717 | orchestrator | + image_id = (known after apply)
2026-02-16 02:24:58.232728 | orchestrator | + metadata = (known after apply)
2026-02-16 02:24:58.232738 | orchestrator | + name = "testbed-volume-manager-base"
2026-02-16 02:24:58.232749 | orchestrator | + region = (known after apply)
2026-02-16 02:24:58.232760 | orchestrator | + size = 80
2026-02-16 02:24:58.232770 | orchestrator | + volume_retype_policy = "never"
2026-02-16 02:24:58.232781 | orchestrator | + volume_type = "ssd"
2026-02-16 02:24:58.232791 | orchestrator | }
2026-02-16 02:24:58.232802 | orchestrator |
2026-02-16 02:24:58.232813 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-16 02:24:58.232823 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-16 02:24:58.232834 | orchestrator | + attachment = (known after apply)
2026-02-16 02:24:58.232844 | orchestrator | + availability_zone = "nova"
2026-02-16 02:24:58.232855 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.232873 | orchestrator | + image_id = (known after apply)
2026-02-16 02:24:58.232884 | orchestrator | + metadata = (known after apply)
2026-02-16 02:24:58.232895 | orchestrator | + name = "testbed-volume-0-node-base"
2026-02-16 02:24:58.232905 | orchestrator | + region = (known after apply)
2026-02-16 02:24:58.232916 | orchestrator | + size = 80
2026-02-16 02:24:58.232927 | orchestrator | + volume_retype_policy = "never"
2026-02-16 02:24:58.232937 | orchestrator | + volume_type = "ssd"
2026-02-16 02:24:58.232948 | orchestrator | }
2026-02-16 02:24:58.232959 | orchestrator |
2026-02-16 02:24:58.232969 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-16 02:24:58.232980 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-16 02:24:58.232991 | orchestrator | + attachment = (known after apply)
2026-02-16 02:24:58.233001 | orchestrator | + availability_zone = "nova"
2026-02-16 02:24:58.233011 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.233022 | orchestrator | + image_id = (known after apply)
2026-02-16 02:24:58.233033 | orchestrator | + metadata = (known after apply)
2026-02-16 02:24:58.233043 | orchestrator | + name = "testbed-volume-1-node-base"
2026-02-16 02:24:58.233054 | orchestrator | + region = (known after apply)
2026-02-16 02:24:58.233064 | orchestrator | + size = 80
2026-02-16 02:24:58.233074 | orchestrator | + volume_retype_policy = "never"
2026-02-16 02:24:58.233085 | orchestrator | + volume_type = "ssd"
2026-02-16 02:24:58.233096 | orchestrator | }
2026-02-16 02:24:58.233107 | orchestrator |
2026-02-16 02:24:58.233117 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-16 02:24:58.233128 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-16 02:24:58.233138 | orchestrator | + attachment = (known after apply)
2026-02-16 02:24:58.233149 | orchestrator | + availability_zone = "nova"
2026-02-16 02:24:58.233169 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.233180 | orchestrator | + image_id = (known after apply)
2026-02-16 02:24:58.233190 | orchestrator | + metadata = (known after apply)
2026-02-16 02:24:58.233201 | orchestrator | + name = "testbed-volume-2-node-base"
2026-02-16 02:24:58.233212 | orchestrator | + region = (known after apply)
2026-02-16 02:24:58.233222 | orchestrator | + size = 80
2026-02-16 02:24:58.233232 | orchestrator | + volume_retype_policy = "never"
2026-02-16 02:24:58.233243 | orchestrator | + volume_type = "ssd"
2026-02-16 02:24:58.233254 | orchestrator | }
2026-02-16 02:24:58.233264 | orchestrator |
2026-02-16 02:24:58.233275 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-16 02:24:58.233285 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-16 02:24:58.233296 | orchestrator | + attachment = (known after apply)
2026-02-16 02:24:58.233306 | orchestrator | + availability_zone = "nova"
2026-02-16 02:24:58.233317 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.233327 | orchestrator | + image_id = (known after apply)
2026-02-16 02:24:58.233338 | orchestrator | + metadata = (known after apply)
2026-02-16 02:24:58.233354 | orchestrator | + name = "testbed-volume-3-node-base"
2026-02-16 02:24:58.233365 | orchestrator | + region = (known after apply)
2026-02-16 02:24:58.233375 | orchestrator | + size = 80
2026-02-16 02:24:58.233386 | orchestrator | + volume_retype_policy = "never"
2026-02-16 02:24:58.233397 | orchestrator | + volume_type = "ssd"
2026-02-16 02:24:58.233407 | orchestrator | }
2026-02-16 02:24:58.233418 | orchestrator |
2026-02-16 02:24:58.233428 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-16 02:24:58.233439 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-16 02:24:58.233450 | orchestrator | + attachment = (known after apply)
2026-02-16 02:24:58.233461 | orchestrator | + availability_zone = "nova"
2026-02-16 02:24:58.233471 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.233489 | orchestrator | + image_id = (known after apply)
2026-02-16 02:24:58.233500 | orchestrator | + metadata = (known after apply)
2026-02-16 02:24:58.233511 | orchestrator | + name = "testbed-volume-4-node-base"
2026-02-16 02:24:58.233521 | orchestrator | + region = (known after apply)
2026-02-16 02:24:58.233532 | orchestrator | + size = 80
2026-02-16 02:24:58.233542 | orchestrator | + volume_retype_policy = "never"
2026-02-16 02:24:58.233598 | orchestrator | + volume_type = "ssd"
2026-02-16 02:24:58.233613 | orchestrator | }
2026-02-16 02:24:58.233624 | orchestrator |
2026-02-16 02:24:58.233635 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-16 02:24:58.233645 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-16 02:24:58.233656 | orchestrator | + attachment = (known after apply)
2026-02-16 02:24:58.233667 | orchestrator | + availability_zone = "nova"
2026-02-16 02:24:58.233677 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.233688 | orchestrator | + image_id = (known after apply)
2026-02-16 02:24:58.233699 | orchestrator | + metadata = (known after apply)
2026-02-16 02:24:58.233710 | orchestrator | + name = "testbed-volume-5-node-base"
2026-02-16 02:24:58.233720 | orchestrator | + region = (known after apply)
2026-02-16 02:24:58.233731 | orchestrator | + size = 80
2026-02-16 02:24:58.233742 | orchestrator | + volume_retype_policy = "never"
2026-02-16 02:24:58.233752 | orchestrator | + volume_type = "ssd"
2026-02-16 02:24:58.233762 | orchestrator | }
2026-02-16 02:24:58.233773 | orchestrator |
2026-02-16 02:24:58.233784 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-16 02:24:58.233795 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-16 02:24:58.233806 | orchestrator | + attachment = (known after apply)
2026-02-16 02:24:58.233816 | orchestrator | + availability_zone = "nova"
2026-02-16 02:24:58.233827 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.233837 | orchestrator | + metadata = (known after apply)
2026-02-16 02:24:58.233848 | orchestrator | + name = "testbed-volume-0-node-3"
2026-02-16 02:24:58.233859 | orchestrator | + region = (known after apply)
2026-02-16 02:24:58.233869 | orchestrator | + size = 20
2026-02-16 02:24:58.233880 | orchestrator | + volume_retype_policy = "never"
2026-02-16 02:24:58.233891 | orchestrator | + volume_type = "ssd"
2026-02-16 02:24:58.233901 | orchestrator | }
2026-02-16 02:24:58.233912 | orchestrator |
2026-02-16 02:24:58.233923 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-16 02:24:58.233933 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-16 02:24:58.233944 | orchestrator | + attachment = (known after apply)
2026-02-16 02:24:58.233955 | orchestrator | + availability_zone = "nova"
2026-02-16 02:24:58.233965 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.233976 | orchestrator | + metadata = (known after apply)
2026-02-16 02:24:58.233986 | orchestrator | + name = "testbed-volume-1-node-4"
2026-02-16 02:24:58.233997 | orchestrator | + region = (known after apply)
2026-02-16 02:24:58.234007 | orchestrator | + size = 20
2026-02-16 02:24:58.234056 | orchestrator | + volume_retype_policy = "never"
2026-02-16 02:24:58.234067 | orchestrator | + volume_type = "ssd"
2026-02-16 02:24:58.234078 | orchestrator | }
2026-02-16 02:24:58.234089 | orchestrator |
2026-02-16 02:24:58.234100 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-16 02:24:58.234110 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-16 02:24:58.234121 | orchestrator | + attachment = (known after apply)
2026-02-16 02:24:58.234132 | orchestrator | + availability_zone = "nova"
2026-02-16 02:24:58.234142 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.234153 | orchestrator | + metadata = (known after apply)
2026-02-16 02:24:58.234163 | orchestrator | + name = "testbed-volume-2-node-5"
2026-02-16 02:24:58.234174 | orchestrator | + region = (known after apply)
2026-02-16 02:24:58.234192 | orchestrator | + size = 20
2026-02-16 02:24:58.234203 | orchestrator | + volume_retype_policy = "never"
2026-02-16 02:24:58.234213 | orchestrator | + volume_type = "ssd"
2026-02-16 02:24:58.234224 | orchestrator | }
2026-02-16 02:24:58.234235 | orchestrator |
2026-02-16 02:24:58.234245 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-16 02:24:58.234256 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-16 02:24:58.234267 | orchestrator | + attachment = (known after apply)
2026-02-16 02:24:58.234277 | orchestrator | + availability_zone = "nova"
2026-02-16 02:24:58.234296 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.234307 | orchestrator | + metadata = (known after apply)
2026-02-16 02:24:58.234334 | orchestrator | + name = "testbed-volume-3-node-3"
2026-02-16 02:24:58.234345 | orchestrator | + region = (known after apply)
2026-02-16 02:24:58.234355 | orchestrator | + size = 20
2026-02-16 02:24:58.234366 | orchestrator | + volume_retype_policy = "never"
2026-02-16 02:24:58.234377 | orchestrator | + volume_type = "ssd"
2026-02-16 02:24:58.234387 | orchestrator | }
2026-02-16 02:24:58.234398 | orchestrator |
2026-02-16 02:24:58.234409 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-16 02:24:58.234419 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-16 02:24:58.234430 | orchestrator | + attachment = (known after apply)
2026-02-16 02:24:58.234441 | orchestrator | + availability_zone = "nova"
2026-02-16 02:24:58.234451 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.234462 | orchestrator | + metadata = (known after apply)
2026-02-16 02:24:58.234472 | orchestrator | + name = "testbed-volume-4-node-4"
2026-02-16 02:24:58.234483 | orchestrator | + region = (known after apply)
2026-02-16 02:24:58.234500 | orchestrator | + size = 20
2026-02-16 02:24:58.234511 | orchestrator | + volume_retype_policy = "never"
2026-02-16 02:24:58.234521 | orchestrator | + volume_type = "ssd"
2026-02-16 02:24:58.234532 | orchestrator | }
2026-02-16 02:24:58.234542 | orchestrator |
2026-02-16 02:24:58.234601 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-16 02:24:58.234616 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-16 02:24:58.234627 | orchestrator | + attachment = (known after apply)
2026-02-16 02:24:58.234638 | orchestrator | + availability_zone = "nova"
2026-02-16 02:24:58.234648 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.234659 | orchestrator | + metadata = (known after apply)
2026-02-16 02:24:58.234669 | orchestrator | + name = "testbed-volume-5-node-5"
2026-02-16 02:24:58.234680 | orchestrator | + region = (known after apply)
2026-02-16 02:24:58.234691 | orchestrator | + size = 20
2026-02-16 02:24:58.234701 | orchestrator | + volume_retype_policy = "never"
2026-02-16 02:24:58.234712 | orchestrator | + volume_type = "ssd"
2026-02-16 02:24:58.234723 | orchestrator | }
2026-02-16 02:24:58.234733 | orchestrator |
2026-02-16 02:24:58.234744 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-16 02:24:58.234755 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-16 02:24:58.234765 | orchestrator | + attachment = (known after apply)
2026-02-16 02:24:58.234776 | orchestrator | + availability_zone = "nova"
2026-02-16 02:24:58.234787 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.234798 | orchestrator | + metadata = (known after apply)
2026-02-16 02:24:58.234808 | orchestrator | + name = "testbed-volume-6-node-3"
2026-02-16 02:24:58.234819 | orchestrator | + region = (known after apply)
2026-02-16 02:24:58.234829 | orchestrator | + size = 20
2026-02-16 02:24:58.234840 | orchestrator | + volume_retype_policy = "never"
2026-02-16 02:24:58.234851 | orchestrator | + volume_type = "ssd"
2026-02-16 02:24:58.234861 | orchestrator | }
2026-02-16 02:24:58.234872 | orchestrator |
2026-02-16 02:24:58.234883 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-16 02:24:58.234894 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-16 02:24:58.234913 | orchestrator | + attachment = (known after apply)
2026-02-16 02:24:58.234923 | orchestrator | + availability_zone = "nova"
2026-02-16 02:24:58.234934 | orchestrator | + id = (known after apply)
2026-02-16 02:24:58.234945 | orchestrator | + metadata = (known after apply)
2026-02-16 02:24:58.234955 | orchestrator | + name = "testbed-volume-7-node-4"
2026-02-16 02:24:58.234966 | orchestrator | + region = (known after apply)
2026-02-16 02:24:58.234977 | orchestrator | + size = 20
2026-02-16 02:24:58.234988 | orchestrator | + volume_retype_policy = "never"
2026-02-16 02:24:58.234999 | orchestrator | + volume_type = "ssd"
2026-02-16 02:24:58.235009 | orchestrator | }
2026-02-16 02:24:58.235020 | orchestrator |
2026-02-16 02:24:58.235031 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-16 02:24:58.235042 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-16 02:24:58.235052 | orchestrator | + attachment = (known after apply) 2026-02-16 02:24:58.235063 | orchestrator | + availability_zone = "nova" 2026-02-16 02:24:58.235074 | orchestrator | + id = (known after apply) 2026-02-16 02:24:58.235084 | orchestrator | + metadata = (known after apply) 2026-02-16 02:24:58.235095 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-16 02:24:58.235105 | orchestrator | + region = (known after apply) 2026-02-16 02:24:58.235116 | orchestrator | + size = 20 2026-02-16 02:24:58.235127 | orchestrator | + volume_retype_policy = "never" 2026-02-16 02:24:58.235138 | orchestrator | + volume_type = "ssd" 2026-02-16 02:24:58.235148 | orchestrator | } 2026-02-16 02:24:58.235159 | orchestrator | 2026-02-16 02:24:58.235170 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-16 02:24:58.235180 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-16 02:24:58.235191 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-16 02:24:58.235202 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-16 02:24:58.235212 | orchestrator | + all_metadata = (known after apply) 2026-02-16 02:24:58.235223 | orchestrator | + all_tags = (known after apply) 2026-02-16 02:24:58.235233 | orchestrator | + availability_zone = "nova" 2026-02-16 02:24:58.235244 | orchestrator | + config_drive = true 2026-02-16 02:24:58.235255 | orchestrator | + created = (known after apply) 2026-02-16 02:24:58.235265 | orchestrator | + flavor_id = (known after apply) 2026-02-16 02:24:58.235276 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-16 02:24:58.235287 | orchestrator | + force_delete = false 2026-02-16 02:24:58.235297 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-16 02:24:58.235308 | 
orchestrator | + id = (known after apply) 2026-02-16 02:24:58.235318 | orchestrator | + image_id = (known after apply) 2026-02-16 02:24:58.235329 | orchestrator | + image_name = (known after apply) 2026-02-16 02:24:58.235339 | orchestrator | + key_pair = "testbed" 2026-02-16 02:24:58.235350 | orchestrator | + name = "testbed-manager" 2026-02-16 02:24:58.235360 | orchestrator | + power_state = "active" 2026-02-16 02:24:58.235371 | orchestrator | + region = (known after apply) 2026-02-16 02:24:58.235382 | orchestrator | + security_groups = (known after apply) 2026-02-16 02:24:58.235392 | orchestrator | + stop_before_destroy = false 2026-02-16 02:24:58.235409 | orchestrator | + updated = (known after apply) 2026-02-16 02:24:58.235420 | orchestrator | + user_data = (sensitive value) 2026-02-16 02:24:58.235431 | orchestrator | 2026-02-16 02:24:58.235442 | orchestrator | + block_device { 2026-02-16 02:24:58.235453 | orchestrator | + boot_index = 0 2026-02-16 02:24:58.235464 | orchestrator | + delete_on_termination = false 2026-02-16 02:24:58.235481 | orchestrator | + destination_type = "volume" 2026-02-16 02:24:58.235492 | orchestrator | + multiattach = false 2026-02-16 02:24:58.235502 | orchestrator | + source_type = "volume" 2026-02-16 02:24:58.235513 | orchestrator | + uuid = (known after apply) 2026-02-16 02:24:58.235531 | orchestrator | } 2026-02-16 02:24:58.235542 | orchestrator | 2026-02-16 02:24:58.235690 | orchestrator | + network { 2026-02-16 02:24:58.235735 | orchestrator | + access_network = false 2026-02-16 02:24:58.235746 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-16 02:24:58.235757 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-16 02:24:58.235768 | orchestrator | + mac = (known after apply) 2026-02-16 02:24:58.235779 | orchestrator | + name = (known after apply) 2026-02-16 02:24:58.235789 | orchestrator | + port = (known after apply) 2026-02-16 02:24:58.235800 | orchestrator | + uuid = (known after apply) 2026-02-16 
02:24:58.235811 | orchestrator | } 2026-02-16 02:24:58.235822 | orchestrator | } 2026-02-16 02:24:58.235832 | orchestrator | 2026-02-16 02:24:58.235844 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-16 02:24:58.235855 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-16 02:24:58.235866 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-16 02:24:58.235876 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-16 02:24:58.235887 | orchestrator | + all_metadata = (known after apply) 2026-02-16 02:24:58.235897 | orchestrator | + all_tags = (known after apply) 2026-02-16 02:24:58.235908 | orchestrator | + availability_zone = "nova" 2026-02-16 02:24:58.235918 | orchestrator | + config_drive = true 2026-02-16 02:24:58.235929 | orchestrator | + created = (known after apply) 2026-02-16 02:24:58.235939 | orchestrator | + flavor_id = (known after apply) 2026-02-16 02:24:58.235950 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-16 02:24:58.235960 | orchestrator | + force_delete = false 2026-02-16 02:24:58.235971 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-16 02:24:58.235982 | orchestrator | + id = (known after apply) 2026-02-16 02:24:58.235993 | orchestrator | + image_id = (known after apply) 2026-02-16 02:24:58.236003 | orchestrator | + image_name = (known after apply) 2026-02-16 02:24:58.236014 | orchestrator | + key_pair = "testbed" 2026-02-16 02:24:58.236024 | orchestrator | + name = "testbed-node-0" 2026-02-16 02:24:58.236035 | orchestrator | + power_state = "active" 2026-02-16 02:24:58.236046 | orchestrator | + region = (known after apply) 2026-02-16 02:24:58.236056 | orchestrator | + security_groups = (known after apply) 2026-02-16 02:24:58.236067 | orchestrator | + stop_before_destroy = false 2026-02-16 02:24:58.236077 | orchestrator | + updated = (known after apply) 2026-02-16 02:24:58.236088 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-16 02:24:58.236099 | orchestrator | 2026-02-16 02:24:58.236110 | orchestrator | + block_device { 2026-02-16 02:24:58.236121 | orchestrator | + boot_index = 0 2026-02-16 02:24:58.236131 | orchestrator | + delete_on_termination = false 2026-02-16 02:24:58.236142 | orchestrator | + destination_type = "volume" 2026-02-16 02:24:58.236152 | orchestrator | + multiattach = false 2026-02-16 02:24:58.236161 | orchestrator | + source_type = "volume" 2026-02-16 02:24:58.236170 | orchestrator | + uuid = (known after apply) 2026-02-16 02:24:58.236180 | orchestrator | } 2026-02-16 02:24:58.236190 | orchestrator | 2026-02-16 02:24:58.236199 | orchestrator | + network { 2026-02-16 02:24:58.236208 | orchestrator | + access_network = false 2026-02-16 02:24:58.236218 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-16 02:24:58.236228 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-16 02:24:58.236237 | orchestrator | + mac = (known after apply) 2026-02-16 02:24:58.236247 | orchestrator | + name = (known after apply) 2026-02-16 02:24:58.236256 | orchestrator | + port = (known after apply) 2026-02-16 02:24:58.236266 | orchestrator | + uuid = (known after apply) 2026-02-16 02:24:58.236275 | orchestrator | } 2026-02-16 02:24:58.236285 | orchestrator | } 2026-02-16 02:24:58.236295 | orchestrator | 2026-02-16 02:24:58.236304 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-16 02:24:58.236314 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-16 02:24:58.236323 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-16 02:24:58.236349 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-16 02:24:58.236358 | orchestrator | + all_metadata = (known after apply) 2026-02-16 02:24:58.236368 | orchestrator | + all_tags = (known after apply) 2026-02-16 02:24:58.236377 | orchestrator | + availability_zone = "nova" 2026-02-16 02:24:58.236387 
| orchestrator | + config_drive = true 2026-02-16 02:24:58.236396 | orchestrator | + created = (known after apply) 2026-02-16 02:24:58.236405 | orchestrator | + flavor_id = (known after apply) 2026-02-16 02:24:58.236415 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-16 02:24:58.236424 | orchestrator | + force_delete = false 2026-02-16 02:24:58.236434 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-16 02:24:58.236443 | orchestrator | + id = (known after apply) 2026-02-16 02:24:58.236452 | orchestrator | + image_id = (known after apply) 2026-02-16 02:24:58.236462 | orchestrator | + image_name = (known after apply) 2026-02-16 02:24:58.236471 | orchestrator | + key_pair = "testbed" 2026-02-16 02:24:58.236481 | orchestrator | + name = "testbed-node-1" 2026-02-16 02:24:58.236490 | orchestrator | + power_state = "active" 2026-02-16 02:24:58.236499 | orchestrator | + region = (known after apply) 2026-02-16 02:24:58.236509 | orchestrator | + security_groups = (known after apply) 2026-02-16 02:24:58.236518 | orchestrator | + stop_before_destroy = false 2026-02-16 02:24:58.236528 | orchestrator | + updated = (known after apply) 2026-02-16 02:24:58.236537 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-16 02:24:58.236547 | orchestrator | 2026-02-16 02:24:58.236581 | orchestrator | + block_device { 2026-02-16 02:24:58.236591 | orchestrator | + boot_index = 0 2026-02-16 02:24:58.236601 | orchestrator | + delete_on_termination = false 2026-02-16 02:24:58.236610 | orchestrator | + destination_type = "volume" 2026-02-16 02:24:58.236620 | orchestrator | + multiattach = false 2026-02-16 02:24:58.236629 | orchestrator | + source_type = "volume" 2026-02-16 02:24:58.236638 | orchestrator | + uuid = (known after apply) 2026-02-16 02:24:58.236648 | orchestrator | } 2026-02-16 02:24:58.236657 | orchestrator | 2026-02-16 02:24:58.236667 | orchestrator | + network { 2026-02-16 02:24:58.236686 | orchestrator | + access_network = 
false 2026-02-16 02:24:58.236696 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-16 02:24:58.236706 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-16 02:24:58.236715 | orchestrator | + mac = (known after apply) 2026-02-16 02:24:58.236725 | orchestrator | + name = (known after apply) 2026-02-16 02:24:58.236734 | orchestrator | + port = (known after apply) 2026-02-16 02:24:58.236744 | orchestrator | + uuid = (known after apply) 2026-02-16 02:24:58.236753 | orchestrator | } 2026-02-16 02:24:58.236763 | orchestrator | } 2026-02-16 02:24:58.236772 | orchestrator | 2026-02-16 02:24:58.236782 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-16 02:24:58.236792 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-16 02:24:58.236801 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-16 02:24:58.236810 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-16 02:24:58.236821 | orchestrator | + all_metadata = (known after apply) 2026-02-16 02:24:58.236830 | orchestrator | + all_tags = (known after apply) 2026-02-16 02:24:58.236848 | orchestrator | + availability_zone = "nova" 2026-02-16 02:24:58.236857 | orchestrator | + config_drive = true 2026-02-16 02:24:58.236867 | orchestrator | + created = (known after apply) 2026-02-16 02:24:58.236876 | orchestrator | + flavor_id = (known after apply) 2026-02-16 02:24:58.236886 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-16 02:24:58.236895 | orchestrator | + force_delete = false 2026-02-16 02:24:58.236904 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-16 02:24:58.236914 | orchestrator | + id = (known after apply) 2026-02-16 02:24:58.236923 | orchestrator | + image_id = (known after apply) 2026-02-16 02:24:58.236939 | orchestrator | + image_name = (known after apply) 2026-02-16 02:24:58.236948 | orchestrator | + key_pair = "testbed" 2026-02-16 02:24:58.236957 | orchestrator | + name = 
"testbed-node-2" 2026-02-16 02:24:58.236967 | orchestrator | + power_state = "active" 2026-02-16 02:24:58.236976 | orchestrator | + region = (known after apply) 2026-02-16 02:24:58.236986 | orchestrator | + security_groups = (known after apply) 2026-02-16 02:24:58.236995 | orchestrator | + stop_before_destroy = false 2026-02-16 02:24:58.237004 | orchestrator | + updated = (known after apply) 2026-02-16 02:24:58.237014 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-16 02:24:58.237024 | orchestrator | 2026-02-16 02:24:58.237033 | orchestrator | + block_device { 2026-02-16 02:24:58.237043 | orchestrator | + boot_index = 0 2026-02-16 02:24:58.237052 | orchestrator | + delete_on_termination = false 2026-02-16 02:24:58.237061 | orchestrator | + destination_type = "volume" 2026-02-16 02:24:58.237071 | orchestrator | + multiattach = false 2026-02-16 02:24:58.237080 | orchestrator | + source_type = "volume" 2026-02-16 02:24:58.237089 | orchestrator | + uuid = (known after apply) 2026-02-16 02:24:58.237099 | orchestrator | } 2026-02-16 02:24:58.237109 | orchestrator | 2026-02-16 02:24:58.237118 | orchestrator | + network { 2026-02-16 02:24:58.237127 | orchestrator | + access_network = false 2026-02-16 02:24:58.237137 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-16 02:24:58.237146 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-16 02:24:58.237155 | orchestrator | + mac = (known after apply) 2026-02-16 02:24:58.237165 | orchestrator | + name = (known after apply) 2026-02-16 02:24:58.237174 | orchestrator | + port = (known after apply) 2026-02-16 02:24:58.237184 | orchestrator | + uuid = (known after apply) 2026-02-16 02:24:58.237193 | orchestrator | } 2026-02-16 02:24:58.237203 | orchestrator | } 2026-02-16 02:24:58.237212 | orchestrator | 2026-02-16 02:24:58.237222 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-16 02:24:58.237231 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-16 02:24:58.237241 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-16 02:24:58.237250 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-16 02:24:58.237260 | orchestrator | + all_metadata = (known after apply) 2026-02-16 02:24:58.237269 | orchestrator | + all_tags = (known after apply) 2026-02-16 02:24:58.237278 | orchestrator | + availability_zone = "nova" 2026-02-16 02:24:58.237288 | orchestrator | + config_drive = true 2026-02-16 02:24:58.237297 | orchestrator | + created = (known after apply) 2026-02-16 02:24:58.237306 | orchestrator | + flavor_id = (known after apply) 2026-02-16 02:24:58.237316 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-16 02:24:58.237325 | orchestrator | + force_delete = false 2026-02-16 02:24:58.237334 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-16 02:24:58.237344 | orchestrator | + id = (known after apply) 2026-02-16 02:24:58.237353 | orchestrator | + image_id = (known after apply) 2026-02-16 02:24:58.237363 | orchestrator | + image_name = (known after apply) 2026-02-16 02:24:58.237372 | orchestrator | + key_pair = "testbed" 2026-02-16 02:24:58.237382 | orchestrator | + name = "testbed-node-3" 2026-02-16 02:24:58.237391 | orchestrator | + power_state = "active" 2026-02-16 02:24:58.237400 | orchestrator | + region = (known after apply) 2026-02-16 02:24:58.237409 | orchestrator | + security_groups = (known after apply) 2026-02-16 02:24:58.237419 | orchestrator | + stop_before_destroy = false 2026-02-16 02:24:58.237428 | orchestrator | + updated = (known after apply) 2026-02-16 02:24:58.237438 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-16 02:24:58.237447 | orchestrator | 2026-02-16 02:24:58.237457 | orchestrator | + block_device { 2026-02-16 02:24:58.237471 | orchestrator | + boot_index = 0 2026-02-16 02:24:58.237480 | orchestrator | + delete_on_termination = false 2026-02-16 
02:24:58.237490 | orchestrator | + destination_type = "volume" 2026-02-16 02:24:58.237505 | orchestrator | + multiattach = false 2026-02-16 02:24:58.237515 | orchestrator | + source_type = "volume" 2026-02-16 02:24:58.237524 | orchestrator | + uuid = (known after apply) 2026-02-16 02:24:58.237534 | orchestrator | } 2026-02-16 02:24:58.237543 | orchestrator | 2026-02-16 02:24:58.237575 | orchestrator | + network { 2026-02-16 02:24:58.237586 | orchestrator | + access_network = false 2026-02-16 02:24:58.237595 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-16 02:24:58.237604 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-16 02:24:58.237614 | orchestrator | + mac = (known after apply) 2026-02-16 02:24:58.237623 | orchestrator | + name = (known after apply) 2026-02-16 02:24:58.237633 | orchestrator | + port = (known after apply) 2026-02-16 02:24:58.237642 | orchestrator | + uuid = (known after apply) 2026-02-16 02:24:58.237651 | orchestrator | } 2026-02-16 02:24:58.237661 | orchestrator | } 2026-02-16 02:24:58.237670 | orchestrator | 2026-02-16 02:24:58.237680 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-16 02:24:58.237695 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-16 02:24:58.237705 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-16 02:24:58.237715 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-16 02:24:58.237724 | orchestrator | + all_metadata = (known after apply) 2026-02-16 02:24:58.237734 | orchestrator | + all_tags = (known after apply) 2026-02-16 02:24:58.237743 | orchestrator | + availability_zone = "nova" 2026-02-16 02:24:58.237752 | orchestrator | + config_drive = true 2026-02-16 02:24:58.237762 | orchestrator | + created = (known after apply) 2026-02-16 02:24:58.237771 | orchestrator | + flavor_id = (known after apply) 2026-02-16 02:24:58.237781 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-16 02:24:58.237790 | 
orchestrator | + force_delete = false 2026-02-16 02:24:58.237799 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-16 02:24:58.237809 | orchestrator | + id = (known after apply) 2026-02-16 02:24:58.237818 | orchestrator | + image_id = (known after apply) 2026-02-16 02:24:58.237828 | orchestrator | + image_name = (known after apply) 2026-02-16 02:24:58.237837 | orchestrator | + key_pair = "testbed" 2026-02-16 02:24:58.237846 | orchestrator | + name = "testbed-node-4" 2026-02-16 02:24:58.237856 | orchestrator | + power_state = "active" 2026-02-16 02:24:58.237865 | orchestrator | + region = (known after apply) 2026-02-16 02:24:58.237875 | orchestrator | + security_groups = (known after apply) 2026-02-16 02:24:58.237884 | orchestrator | + stop_before_destroy = false 2026-02-16 02:24:58.237893 | orchestrator | + updated = (known after apply) 2026-02-16 02:24:58.237903 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-16 02:24:58.237912 | orchestrator | 2026-02-16 02:24:58.237922 | orchestrator | + block_device { 2026-02-16 02:24:58.237931 | orchestrator | + boot_index = 0 2026-02-16 02:24:58.237941 | orchestrator | + delete_on_termination = false 2026-02-16 02:24:58.237950 | orchestrator | + destination_type = "volume" 2026-02-16 02:24:58.237959 | orchestrator | + multiattach = false 2026-02-16 02:24:58.237969 | orchestrator | + source_type = "volume" 2026-02-16 02:24:58.237978 | orchestrator | + uuid = (known after apply) 2026-02-16 02:24:58.237988 | orchestrator | } 2026-02-16 02:24:58.237997 | orchestrator | 2026-02-16 02:24:58.238006 | orchestrator | + network { 2026-02-16 02:24:58.238045 | orchestrator | + access_network = false 2026-02-16 02:24:58.238057 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-16 02:24:58.238066 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-16 02:24:58.238076 | orchestrator | + mac = (known after apply) 2026-02-16 02:24:58.238085 | orchestrator | + name = (known 
after apply) 2026-02-16 02:24:58.238095 | orchestrator | + port = (known after apply) 2026-02-16 02:24:58.238104 | orchestrator | + uuid = (known after apply) 2026-02-16 02:24:58.238114 | orchestrator | } 2026-02-16 02:24:58.238123 | orchestrator | } 2026-02-16 02:24:58.238141 | orchestrator | 2026-02-16 02:24:58.238151 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-16 02:24:58.238160 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-16 02:24:58.238170 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-16 02:24:58.238179 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-16 02:24:58.238189 | orchestrator | + all_metadata = (known after apply) 2026-02-16 02:24:58.238198 | orchestrator | + all_tags = (known after apply) 2026-02-16 02:24:58.238207 | orchestrator | + availability_zone = "nova" 2026-02-16 02:24:58.238217 | orchestrator | + config_drive = true 2026-02-16 02:24:58.238226 | orchestrator | + created = (known after apply) 2026-02-16 02:24:58.238236 | orchestrator | + flavor_id = (known after apply) 2026-02-16 02:24:58.238245 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-16 02:24:58.238255 | orchestrator | + force_delete = false 2026-02-16 02:24:58.238269 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-16 02:24:58.238279 | orchestrator | + id = (known after apply) 2026-02-16 02:24:58.238288 | orchestrator | + image_id = (known after apply) 2026-02-16 02:24:58.238297 | orchestrator | + image_name = (known after apply) 2026-02-16 02:24:58.238307 | orchestrator | + key_pair = "testbed" 2026-02-16 02:24:58.238316 | orchestrator | + name = "testbed-node-5" 2026-02-16 02:24:58.238325 | orchestrator | + power_state = "active" 2026-02-16 02:24:58.238335 | orchestrator | + region = (known after apply) 2026-02-16 02:24:58.238344 | orchestrator | + security_groups = (known after apply) 2026-02-16 02:24:58.238354 | orchestrator | + 
stop_before_destroy = false 2026-02-16 02:24:58.238363 | orchestrator | + updated = (known after apply) 2026-02-16 02:24:58.238372 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-16 02:24:58.238382 | orchestrator | 2026-02-16 02:24:58.238391 | orchestrator | + block_device { 2026-02-16 02:24:58.238401 | orchestrator | + boot_index = 0 2026-02-16 02:24:58.238410 | orchestrator | + delete_on_termination = false 2026-02-16 02:24:58.238420 | orchestrator | + destination_type = "volume" 2026-02-16 02:24:58.238429 | orchestrator | + multiattach = false 2026-02-16 02:24:58.238439 | orchestrator | + source_type = "volume" 2026-02-16 02:24:58.238448 | orchestrator | + uuid = (known after apply) 2026-02-16 02:24:58.238457 | orchestrator | } 2026-02-16 02:24:58.238467 | orchestrator | 2026-02-16 02:24:58.238476 | orchestrator | + network { 2026-02-16 02:24:58.238486 | orchestrator | + access_network = false 2026-02-16 02:24:58.238495 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-16 02:24:58.238505 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-16 02:24:58.238514 | orchestrator | + mac = (known after apply) 2026-02-16 02:24:58.238523 | orchestrator | + name = (known after apply) 2026-02-16 02:24:58.238533 | orchestrator | + port = (known after apply) 2026-02-16 02:24:58.238542 | orchestrator | + uuid = (known after apply) 2026-02-16 02:24:58.238567 | orchestrator | } 2026-02-16 02:24:58.238578 | orchestrator | } 2026-02-16 02:24:58.238587 | orchestrator | 2026-02-16 02:24:58.238597 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-16 02:24:58.238607 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-16 02:24:58.238616 | orchestrator | + fingerprint = (known after apply) 2026-02-16 02:24:58.238626 | orchestrator | + id = (known after apply) 2026-02-16 02:24:58.238635 | orchestrator | + name = "testbed" 2026-02-16 02:24:58.238645 | orchestrator | + private_key = 
(sensitive value) 2026-02-16 02:24:58.238654 | orchestrator | + public_key = (known after apply) 2026-02-16 02:24:58.238664 | orchestrator | + region = (known after apply) 2026-02-16 02:24:58.238673 | orchestrator | + user_id = (known after apply) 2026-02-16 02:24:58.238682 | orchestrator | } 2026-02-16 02:24:58.238692 | orchestrator | 2026-02-16 02:24:58.238702 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-16 02:24:58.238712 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-16 02:24:58.238733 | orchestrator | + device = (known after apply) 2026-02-16 02:24:58.238743 | orchestrator | + id = (known after apply) 2026-02-16 02:24:58.238753 | orchestrator | + instance_id = (known after apply) 2026-02-16 02:24:58.238762 | orchestrator | + region = (known after apply) 2026-02-16 02:24:58.238771 | orchestrator | + volume_id = (known after apply) 2026-02-16 02:24:58.238781 | orchestrator | } 2026-02-16 02:24:58.238791 | orchestrator | 2026-02-16 02:24:58.238800 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-16 02:24:58.238810 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-16 02:24:58.238819 | orchestrator | + device = (known after apply) 2026-02-16 02:24:58.238829 | orchestrator | + id = (known after apply) 2026-02-16 02:24:58.238838 | orchestrator | + instance_id = (known after apply) 2026-02-16 02:24:58.238848 | orchestrator | + region = (known after apply) 2026-02-16 02:24:58.238857 | orchestrator | + volume_id = (known after apply) 2026-02-16 02:24:58.238866 | orchestrator | } 2026-02-16 02:24:58.238876 | orchestrator | 2026-02-16 02:24:58.238885 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-16 02:24:58.238895 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-02-16 02:24:58.238905 | orchestrator | {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-02-16 02:24:58.244885 | orchestrator |       + network_id = (known after apply)
2026-02-16 02:24:58.244895 | orchestrator |       + no_gateway = false
2026-02-16 02:24:58.244904 | orchestrator |       + region = (known after apply)
2026-02-16 02:24:58.244913 | orchestrator |       + service_types = (known after apply)
2026-02-16 02:24:58.244929 | orchestrator |       + tenant_id = (known after apply)
2026-02-16 02:24:58.244939 | orchestrator |
2026-02-16 02:24:58.244948 | orchestrator |       + allocation_pool {
2026-02-16 02:24:58.244957 | orchestrator |           + end = "192.168.31.250"
2026-02-16 02:24:58.244967 | orchestrator |           + start = "192.168.31.200"
2026-02-16 02:24:58.244976 | orchestrator |         }
2026-02-16 02:24:58.244986 | orchestrator |     }
2026-02-16 02:24:58.244996 | orchestrator |
2026-02-16 02:24:58.245005 | orchestrator |   # terraform_data.image will be created
2026-02-16 02:24:58.245014 | orchestrator |   + resource "terraform_data" "image" {
2026-02-16 02:24:58.245024 | orchestrator |       + id = (known after apply)
2026-02-16 02:24:58.245033 | orchestrator |       + input = "Ubuntu 24.04"
2026-02-16 02:24:58.245043 | orchestrator |       + output = (known after apply)
2026-02-16 02:24:58.245052 | orchestrator |     }
2026-02-16 02:24:58.245062 | orchestrator |
2026-02-16 02:24:58.245071 | orchestrator |   # terraform_data.image_node will be created
2026-02-16 02:24:58.245085 | orchestrator |   + resource "terraform_data" "image_node" {
2026-02-16 02:24:58.245095 | orchestrator |       + id = (known after apply)
2026-02-16 02:24:58.245105 | orchestrator |       + input = "Ubuntu 24.04"
2026-02-16 02:24:58.245114 | orchestrator |       + output = (known after apply)
2026-02-16 02:24:58.245124 | orchestrator |     }
2026-02-16 02:24:58.245133 | orchestrator |
2026-02-16 02:24:58.245142 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-02-16 02:24:58.245152 | orchestrator |
2026-02-16 02:24:58.245161 | orchestrator | Changes to Outputs:
2026-02-16 02:24:58.245171 | orchestrator |   + manager_address = (sensitive value)
2026-02-16 02:24:58.245180 | orchestrator |   + private_key = (sensitive value)
2026-02-16 02:24:58.338286 | orchestrator | terraform_data.image_node: Creating...
2026-02-16 02:24:58.338347 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=1a502002-8fc6-158d-a8de-9a501c22b2ec]
2026-02-16 02:24:58.481182 | orchestrator | terraform_data.image: Creating...
2026-02-16 02:24:58.481778 | orchestrator | terraform_data.image: Creation complete after 0s [id=02f7e0b9-3a30-4d87-4236-42aa25dc725a]
2026-02-16 02:24:58.499525 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-02-16 02:24:58.500503 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-02-16 02:24:58.504579 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-02-16 02:24:58.505911 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-02-16 02:24:58.506124 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-02-16 02:24:58.507258 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-02-16 02:24:58.507507 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-02-16 02:24:58.508668 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-02-16 02:24:58.512347 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-02-16 02:24:58.512534 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-02-16 02:24:58.944307 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-16 02:24:58.957270 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-02-16 02:24:58.962910 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-16 02:24:58.966784 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-02-16 02:24:59.012623 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-02-16 02:24:59.019380 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-02-16 02:25:00.117240 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=0bd72d57-ce33-4101-a1f0-24efef08dd16]
2026-02-16 02:25:00.125935 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-16 02:25:02.160807 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=57ea9400-2602-4802-b9b7-802a488f4705]
2026-02-16 02:25:02.164442 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-16 02:25:02.178140 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=0693774e-e893-4e7b-949f-071f2326db51]
2026-02-16 02:25:02.179635 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=864a7dfe-a330-4dca-9b66-49ca9e8841e5]
2026-02-16 02:25:02.186811 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-16 02:25:02.188629 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=51f5f49d-415a-48de-982e-531dff143e5e]
2026-02-16 02:25:02.192172 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-16 02:25:02.195182 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-16 02:25:02.206006 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=22f5929b-f2f1-4a02-b80c-ace7dc1afd6d]
2026-02-16 02:25:02.206418 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=843bc551-f5ad-4319-82ad-d411f9295fd2]
2026-02-16 02:25:02.215353 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-16 02:25:02.217341 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-16 02:25:02.228778 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=0857a7ec-98e0-4b3a-95cc-40567a4f4a8e]
2026-02-16 02:25:02.239732 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-16 02:25:02.243248 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=b3925fb6a00531d0393eb3e461c39fb46c40ab76]
2026-02-16 02:25:02.249398 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-16 02:25:02.252113 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=560fea90-96fc-4e98-a264-4fc86723b569]
2026-02-16 02:25:02.254433 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=831317a9b3bb1587df3ade1b797e82063263cde1]
2026-02-16 02:25:02.256420 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-16 02:25:02.298207 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=769208b9-3be0-45fa-bf10-39ffe30cf829]
2026-02-16 02:25:03.187929 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=7b3b8f61-8cc4-4cfa-b105-008b266ba379]
2026-02-16 02:25:03.196218 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-02-16 02:25:03.508375 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=f62a15e9-7171-41ff-abc5-7047e911ab8f]
2026-02-16 02:25:05.623773 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=c7144733-ae74-44fe-b24d-98a6f80ad4d8]
2026-02-16 02:25:05.650198 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=2168da4d-1d17-4015-90e6-e36c44513ae5]
2026-02-16 02:25:05.664370 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=d4296cc6-718f-4cad-a4ad-740e974bf2cd]
2026-02-16 02:25:05.671545 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=66717551-de8b-4214-b5a8-5208e0aa8d29]
2026-02-16 02:25:05.689677 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=2335e156-0c07-4cf9-917c-1a2f25b2fc27]
2026-02-16 02:25:05.710981 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=f566252a-854e-4a1e-9644-f4618e7e3b5d]
2026-02-16 02:25:06.103691 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=75fd4707-7e51-402e-b4c4-bbc1ba641e43]
2026-02-16 02:25:06.109297 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-16 02:25:06.109445 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-16 02:25:06.109810 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-16 02:25:06.300876 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=4e50b3e1-c897-4739-995b-63f855ef03e3]
2026-02-16 02:25:06.322940 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-16 02:25:06.323010 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-16 02:25:06.323017 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-16 02:25:06.323038 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=e9c7ae26-c161-4422-b024-d6d3948219f2]
2026-02-16 02:25:06.323044 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-16 02:25:06.334623 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-16 02:25:06.334761 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-16 02:25:06.340878 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-16 02:25:06.341411 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-16 02:25:06.347634 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-16 02:25:06.497525 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=45bc987f-94e3-498a-aca7-351c6f7f6f2e]
2026-02-16 02:25:06.506377 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-16 02:25:06.690319 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=ddcfe503-b254-4429-ab8c-c68d48268f74]
2026-02-16 02:25:06.696395 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-16 02:25:06.857544 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=469d26d9-3e19-4b7f-accd-0690c5938949]
2026-02-16 02:25:06.863474 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-16 02:25:06.901864 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=432e0b83-649c-42b4-8a3f-02d34c57ba3a]
2026-02-16 02:25:06.909325 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-16 02:25:06.965258 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=ff4b3066-d4c5-4af8-a0cd-97adb75e3466]
2026-02-16 02:25:06.973787 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-16 02:25:07.009781 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=8b1f0b10-a0d8-44d7-a7e6-f9fdda30d427]
2026-02-16 02:25:07.014875 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=5cfb8811-bb64-4a48-a9b6-5f2596acbc86]
2026-02-16 02:25:07.015958 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-16 02:25:07.023401 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-16 02:25:07.106436 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=71dd5d02-0e49-4fe7-b353-fee9b10a4c40]
2026-02-16 02:25:07.154930 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=c7ab9dd6-e8fd-467f-bf2d-8d97b6e783e6]
2026-02-16 02:25:07.160758 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=3215954a-0867-4e25-b916-f7d7c86dd4b6]
2026-02-16 02:25:07.262548 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=f6f330db-e25d-4557-b3ce-b3988926d7b2]
2026-02-16 02:25:07.307219 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=66e3f012-2a6f-4011-af65-e03a627aa5a9]
2026-02-16 02:25:07.350093 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=0ebb2cee-d845-44aa-9027-a0960b1078f3]
2026-02-16 02:25:07.406002 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=d7e6e1a6-2255-441d-b034-54da3e3b7a74]
2026-02-16 02:25:07.672978 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=114e34a7-aaee-4652-a164-60b7364d5b2e]
2026-02-16 02:25:07.821523 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=ff5b1ff1-076e-45f4-b7fb-bb09a689414c]
2026-02-16 02:25:08.764686 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=fe0f3a77-6cd0-4aef-84bd-eb19a7a1f6fa]
2026-02-16 02:25:08.788875 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-16 02:25:08.826085 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-02-16 02:25:08.826135 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-16 02:25:08.846085 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-16 02:25:08.862124 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-16 02:25:08.862513 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-02-16 02:25:08.890669 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-16 02:25:10.277276 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=9c4e2fbb-d45b-4d4c-b497-fd55c7a2cedc]
2026-02-16 02:25:10.288792 | orchestrator | local_file.inventory: Creating...
2026-02-16 02:25:10.293531 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-16 02:25:10.293611 | orchestrator | local_file.inventory: Creation complete after 0s [id=9d72629d4112ca9f1ef4d32b9599eab7c195ba00]
2026-02-16 02:25:10.293622 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-16 02:25:10.296867 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=f1200c2153aadd089b4b8c815d4d038f43c0b19b]
2026-02-16 02:25:11.045184 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=9c4e2fbb-d45b-4d4c-b497-fd55c7a2cedc]
2026-02-16 02:25:18.827132 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-16 02:25:18.827247 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-16 02:25:18.869477 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-16 02:25:18.874672 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-16 02:25:18.874718 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-16 02:25:18.881900 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-16 02:25:28.836598 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-16 02:25:28.836730 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-16 02:25:28.870293 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-16 02:25:28.875763 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-16 02:25:28.876060 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-16 02:25:28.883139 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-16 02:25:29.387991 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=8cef7265-909a-4dee-a282-67362f0731d9]
2026-02-16 02:25:29.399343 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=06ddeea9-2544-4be5-81a5-063624890c2d]
2026-02-16 02:25:29.406091 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=bed11325-e576-4c3e-8745-c672c24f7135]
2026-02-16 02:25:38.837904 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-02-16 02:25:38.876217 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-02-16 02:25:38.883458 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-02-16 02:25:39.889650 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=54044496-2e5b-402c-af35-961f7ec0bf7c]
2026-02-16 02:25:39.894598 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=c97afcaf-f077-4e1c-b13c-0466a170b060]
2026-02-16 02:25:40.021056 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=54becf2c-0b8c-4de4-8f73-92b6becc4f25]
2026-02-16 02:25:40.038988 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-16 02:25:40.047755 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-16 02:25:40.049792 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-16 02:25:40.055750 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-16 02:25:40.061992 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-16 02:25:40.063366 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-16 02:25:40.066525 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=5741033489953308657]
2026-02-16 02:25:40.067227 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-16 02:25:40.068931 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-16 02:25:40.069132 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-16 02:25:40.093804 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-16 02:25:40.095055 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-02-16 02:25:43.440344 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=54becf2c-0b8c-4de4-8f73-92b6becc4f25/22f5929b-f2f1-4a02-b80c-ace7dc1afd6d]
2026-02-16 02:25:43.469536 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=8cef7265-909a-4dee-a282-67362f0731d9/843bc551-f5ad-4319-82ad-d411f9295fd2]
2026-02-16 02:25:43.504355 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=06ddeea9-2544-4be5-81a5-063624890c2d/57ea9400-2602-4802-b9b7-802a488f4705]
2026-02-16 02:25:43.521490 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=54becf2c-0b8c-4de4-8f73-92b6becc4f25/560fea90-96fc-4e98-a264-4fc86723b569]
2026-02-16 02:25:43.536075 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=8cef7265-909a-4dee-a282-67362f0731d9/51f5f49d-415a-48de-982e-531dff143e5e]
2026-02-16 02:25:43.548833 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=06ddeea9-2544-4be5-81a5-063624890c2d/0857a7ec-98e0-4b3a-95cc-40567a4f4a8e]
2026-02-16 02:25:49.615678 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=8cef7265-909a-4dee-a282-67362f0731d9/0693774e-e893-4e7b-949f-071f2326db51]
2026-02-16 02:25:49.616897 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=54becf2c-0b8c-4de4-8f73-92b6becc4f25/864a7dfe-a330-4dca-9b66-49ca9e8841e5]
2026-02-16 02:25:49.649828 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=06ddeea9-2544-4be5-81a5-063624890c2d/769208b9-3be0-45fa-bf10-39ffe30cf829]
2026-02-16 02:25:50.095664 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-16 02:26:00.095973 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-02-16 02:26:00.385276 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=44257cf8-1a32-4271-a2da-613902dae7ed]
2026-02-16 02:26:00.400933 | orchestrator |
2026-02-16 02:26:00.401065 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-02-16 02:26:00.401086 | orchestrator |
2026-02-16 02:26:00.401100 | orchestrator | Outputs:
2026-02-16 02:26:00.401112 | orchestrator |
2026-02-16 02:26:00.401124 | orchestrator | manager_address = <sensitive>
2026-02-16 02:26:00.401136 | orchestrator | private_key = <sensitive>
2026-02-16 02:26:00.604178 | orchestrator | ok: Runtime: 0:01:09.436761
2026-02-16 02:26:00.636895 |
2026-02-16 02:26:00.637024 | TASK [Fetch manager address]
2026-02-16 02:26:01.123600 | orchestrator | ok
2026-02-16 02:26:01.133938 |
2026-02-16 02:26:01.134067 | TASK [Set manager_host address]
2026-02-16 02:26:01.215592 | orchestrator | ok
2026-02-16 02:26:01.225184 |
2026-02-16 02:26:01.225308 | LOOP [Update ansible collections]
2026-02-16 02:26:02.574413 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-16 02:26:02.574751 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-16 02:26:02.574807 | orchestrator | Starting galaxy collection install process
2026-02-16 02:26:02.574973 | orchestrator | Process install dependency map
2026-02-16 02:26:02.575013 | orchestrator | Starting collection install process
2026-02-16 02:26:02.575045 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-02-16 02:26:02.575079 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-02-16 02:26:02.575116 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-16 02:26:02.575206 | orchestrator | ok: Item: commons Runtime: 0:00:00.992083
2026-02-16 02:26:03.554598 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-16 02:26:03.554772 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-16 02:26:03.554825 | orchestrator | Starting galaxy collection install process
2026-02-16 02:26:03.554893 | orchestrator | Process install dependency map
2026-02-16 02:26:03.554932 | orchestrator | Starting collection install process
2026-02-16 02:26:03.554966 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-02-16 02:26:03.555000 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-02-16 02:26:03.555034 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-16 02:26:03.555086 | orchestrator | ok: Item: services Runtime: 0:00:00.628209
2026-02-16 02:26:03.571073 |
2026-02-16 02:26:03.571223 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-16 02:26:14.164107 | orchestrator | ok
2026-02-16 02:26:14.173139 |
2026-02-16 02:26:14.173274 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-16 02:27:14.209254 | orchestrator | ok
2026-02-16 02:27:14.220314 |
2026-02-16 02:27:14.220456 | TASK [Fetch manager ssh hostkey]
2026-02-16 02:27:15.793285 | orchestrator | Output suppressed because no_log was given
2026-02-16 02:27:15.809637 |
2026-02-16 02:27:15.809827 | TASK [Get ssh keypair from terraform environment]
2026-02-16 02:27:16.351645 | orchestrator | ok: Runtime: 0:00:00.008966
2026-02-16 02:27:16.367372 |
2026-02-16 02:27:16.367548 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-16 02:27:16.416047 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-16 02:27:16.426415 |
2026-02-16 02:27:16.426567 | TASK [Run manager part 0]
2026-02-16 02:27:17.255383 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-16 02:27:17.306946 | orchestrator |
2026-02-16 02:27:17.306985 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-16 02:27:17.306992 | orchestrator |
2026-02-16 02:27:17.307002 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-16 02:27:18.974947 | orchestrator | ok: [testbed-manager]
2026-02-16 02:27:18.975005 | orchestrator |
2026-02-16 02:27:18.975028 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-16 02:27:18.975038 | orchestrator |
2026-02-16 02:27:18.975048 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-16 02:27:20.889820 | orchestrator | ok: [testbed-manager]
2026-02-16 02:27:20.889858 | orchestrator |
2026-02-16 02:27:20.889867 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-16 02:27:21.536851 | orchestrator | ok: [testbed-manager]
2026-02-16 02:27:21.536943 | orchestrator |
2026-02-16 02:27:21.536964 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-16 02:27:21.596093 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:27:21.596159 | orchestrator |
2026-02-16 02:27:21.596174 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-16 02:27:21.631716 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:27:21.631814 | orchestrator |
2026-02-16 02:27:21.631835 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-16 02:27:21.671246 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:27:21.900609 | orchestrator |
2026-02-16 02:27:21.900652 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-16 02:27:21.900683 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:27:21.900689 | orchestrator |
2026-02-16 02:27:21.900693 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-02-16 02:27:21.900697 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:27:21.900701 | orchestrator |
2026-02-16 02:27:21.900705 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-02-16 02:27:21.900710 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:27:21.900714 | orchestrator |
2026-02-16 02:27:21.900720 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-02-16 02:27:21.900724 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:27:21.900728 | orchestrator |
2026-02-16 02:27:21.900732 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-02-16 02:27:22.538058 | orchestrator | changed: [testbed-manager]
2026-02-16 02:27:22.538153 | orchestrator |
2026-02-16 02:27:22.538174 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-02-16 02:29:55.054735 | orchestrator | changed: [testbed-manager]
2026-02-16 02:29:55.054910 | orchestrator |
2026-02-16 02:29:55.054932 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-16 02:31:15.147540 | orchestrator | changed: [testbed-manager]
2026-02-16 02:31:15.147668 | orchestrator |
2026-02-16 02:31:15.147684 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-16 02:31:33.837446 | orchestrator | changed: [testbed-manager]
2026-02-16 02:31:33.837568 | orchestrator |
2026-02-16 02:31:33.837597 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-16 02:31:42.125308 | orchestrator | changed: [testbed-manager]
2026-02-16 02:31:42.125404 | orchestrator |
2026-02-16 02:31:42.125421 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-02-16 02:31:42.177179 | orchestrator | ok: [testbed-manager]
2026-02-16 02:31:42.177256 | orchestrator |
2026-02-16 02:31:42.177267 | orchestrator | TASK [Get current user] ********************************************************
2026-02-16 02:31:42.959428 | orchestrator | ok: [testbed-manager]
2026-02-16 02:31:42.959506 | orchestrator |
2026-02-16 02:31:42.959519 | orchestrator | TASK [Create venv directory] ***************************************************
2026-02-16 02:31:43.737499 | orchestrator | changed: [testbed-manager]
2026-02-16 02:31:43.737541 | orchestrator |
2026-02-16 02:31:43.737550 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-02-16 02:31:49.898460 | orchestrator | changed: [testbed-manager]
2026-02-16 02:31:49.898563 | orchestrator |
2026-02-16 02:31:49.898605 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-02-16 02:31:55.865825 | orchestrator | changed: [testbed-manager]
2026-02-16 02:31:55.865924 | orchestrator |
2026-02-16 02:31:55.865942 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-02-16 02:31:58.417309 | orchestrator | changed: [testbed-manager]
2026-02-16 02:31:58.418129 | orchestrator |
2026-02-16 02:31:58.418194 | orchestrator | TASK
[Install docker >= 7.1.0] ************************************************* 2026-02-16 02:32:00.149411 | orchestrator | changed: [testbed-manager] 2026-02-16 02:32:00.149495 | orchestrator | 2026-02-16 02:32:00.149510 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-16 02:32:01.244338 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-16 02:32:01.244407 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-16 02:32:01.244421 | orchestrator | 2026-02-16 02:32:01.244433 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-16 02:32:01.291189 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-16 02:32:01.291264 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-16 02:32:01.291277 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-16 02:32:01.291284 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-16 02:32:04.886275 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-16 02:32:04.886392 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-16 02:32:04.886412 | orchestrator | 2026-02-16 02:32:04.886425 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-16 02:32:05.464137 | orchestrator | changed: [testbed-manager] 2026-02-16 02:32:05.464232 | orchestrator | 2026-02-16 02:32:05.464248 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-16 02:34:26.561945 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-16 02:34:26.562107 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-16 02:34:26.562130 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-16 02:34:26.562143 | orchestrator | 2026-02-16 02:34:26.562156 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-16 02:34:28.851561 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-16 02:34:28.851598 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-16 02:34:28.851604 | orchestrator | 2026-02-16 02:34:28.851609 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-16 02:34:28.851614 | orchestrator | 2026-02-16 02:34:28.851618 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-16 02:34:30.229293 | orchestrator | ok: [testbed-manager] 2026-02-16 02:34:30.229330 | orchestrator | 2026-02-16 02:34:30.229339 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-16 02:34:30.276181 | orchestrator | ok: [testbed-manager] 2026-02-16 02:34:30.276223 | 
orchestrator | 2026-02-16 02:34:30.276231 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-16 02:34:30.341159 | orchestrator | ok: [testbed-manager] 2026-02-16 02:34:30.341202 | orchestrator | 2026-02-16 02:34:30.341211 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-16 02:34:31.095275 | orchestrator | changed: [testbed-manager] 2026-02-16 02:34:31.095318 | orchestrator | 2026-02-16 02:34:31.095327 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-16 02:34:31.802239 | orchestrator | changed: [testbed-manager] 2026-02-16 02:34:31.802281 | orchestrator | 2026-02-16 02:34:31.802291 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-16 02:34:33.096243 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-16 02:34:33.096282 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-16 02:34:33.096288 | orchestrator | 2026-02-16 02:34:33.096302 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-16 02:34:34.480510 | orchestrator | changed: [testbed-manager] 2026-02-16 02:34:34.480567 | orchestrator | 2026-02-16 02:34:34.480575 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-16 02:34:36.214547 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-16 02:34:36.214733 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-16 02:34:36.214749 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-16 02:34:36.214762 | orchestrator | 2026-02-16 02:34:36.214775 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-16 02:34:36.272366 | orchestrator | skipping: 
[testbed-manager] 2026-02-16 02:34:36.272406 | orchestrator | 2026-02-16 02:34:36.272413 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-16 02:34:36.350083 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:34:36.350162 | orchestrator | 2026-02-16 02:34:36.350179 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-16 02:34:36.904584 | orchestrator | changed: [testbed-manager] 2026-02-16 02:34:36.904625 | orchestrator | 2026-02-16 02:34:36.904634 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-16 02:34:36.979892 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:34:36.979974 | orchestrator | 2026-02-16 02:34:36.979991 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-16 02:34:37.804833 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-16 02:34:37.804951 | orchestrator | changed: [testbed-manager] 2026-02-16 02:34:37.804968 | orchestrator | 2026-02-16 02:34:37.804981 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-16 02:34:37.846517 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:34:37.846604 | orchestrator | 2026-02-16 02:34:37.846620 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-16 02:34:37.886900 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:34:37.886963 | orchestrator | 2026-02-16 02:34:37.886972 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-16 02:34:37.920300 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:34:37.920367 | orchestrator | 2026-02-16 02:34:37.920380 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-16 02:34:37.985816 | 
orchestrator | skipping: [testbed-manager] 2026-02-16 02:34:37.985940 | orchestrator | 2026-02-16 02:34:37.985967 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-16 02:34:38.712198 | orchestrator | ok: [testbed-manager] 2026-02-16 02:34:38.712289 | orchestrator | 2026-02-16 02:34:38.712308 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-16 02:34:38.712443 | orchestrator | 2026-02-16 02:34:38.712460 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-16 02:34:40.089018 | orchestrator | ok: [testbed-manager] 2026-02-16 02:34:40.089123 | orchestrator | 2026-02-16 02:34:40.089142 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-16 02:34:41.038590 | orchestrator | changed: [testbed-manager] 2026-02-16 02:34:41.038690 | orchestrator | 2026-02-16 02:34:41.038717 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 02:34:41.038734 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-16 02:34:41.038745 | orchestrator | 2026-02-16 02:34:41.239331 | orchestrator | ok: Runtime: 0:07:24.447905 2026-02-16 02:34:41.257498 | 2026-02-16 02:34:41.257638 | TASK [Point out that logging in on the manager is now possible] 2026-02-16 02:34:41.298716 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-02-16 02:34:41.306909 | 2026-02-16 02:34:41.307025 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-16 02:34:41.340638 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-16 02:34:41.349228 | 2026-02-16 02:34:41.349336 | TASK [Run manager part 1 + 2] 2026-02-16 02:34:42.223333 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-16 02:34:42.280594 | orchestrator | 2026-02-16 02:34:42.280639 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-16 02:34:42.280646 | orchestrator | 2026-02-16 02:34:42.280659 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-16 02:34:45.175645 | orchestrator | ok: [testbed-manager] 2026-02-16 02:34:45.175700 | orchestrator | 2026-02-16 02:34:45.175726 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-16 02:34:45.218728 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:34:45.218781 | orchestrator | 2026-02-16 02:34:45.218791 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-16 02:34:45.254518 | orchestrator | ok: [testbed-manager] 2026-02-16 02:34:45.254562 | orchestrator | 2026-02-16 02:34:45.254569 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-16 02:34:45.299470 | orchestrator | ok: [testbed-manager] 2026-02-16 02:34:45.299519 | orchestrator | 2026-02-16 02:34:45.299528 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-16 02:34:45.371679 | orchestrator | ok: [testbed-manager] 2026-02-16 02:34:45.371743 | orchestrator | 2026-02-16 02:34:45.371756 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-16 02:34:45.437017 | orchestrator | ok: [testbed-manager] 2026-02-16 02:34:45.437069 | orchestrator | 2026-02-16 02:34:45.437080 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-16 02:34:45.497029 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-16 02:34:45.497080 | orchestrator | 2026-02-16 02:34:45.497087 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-16 02:34:46.216155 | orchestrator | ok: [testbed-manager] 2026-02-16 02:34:46.216209 | orchestrator | 2026-02-16 02:34:46.216218 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-16 02:34:46.271175 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:34:46.271228 | orchestrator | 2026-02-16 02:34:46.271235 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-16 02:34:47.629825 | orchestrator | changed: [testbed-manager] 2026-02-16 02:34:47.629907 | orchestrator | 2026-02-16 02:34:47.629917 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-16 02:34:48.195607 | orchestrator | ok: [testbed-manager] 2026-02-16 02:34:48.195660 | orchestrator | 2026-02-16 02:34:48.195668 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-16 02:34:49.315640 | orchestrator | changed: [testbed-manager] 2026-02-16 02:34:49.315696 | orchestrator | 2026-02-16 02:34:49.315706 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-16 02:35:04.054382 | orchestrator | changed: [testbed-manager] 2026-02-16 02:35:04.054480 | orchestrator | 2026-02-16 02:35:04.054495 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-16 02:35:04.735267 | orchestrator | ok: [testbed-manager] 2026-02-16 02:35:04.735357 | orchestrator | 2026-02-16 02:35:04.735375 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-16 02:35:04.793422 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:35:04.793476 | orchestrator | 2026-02-16 02:35:04.793482 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-16 02:35:05.729319 | orchestrator | changed: [testbed-manager] 2026-02-16 02:35:05.729431 | orchestrator | 2026-02-16 02:35:05.729460 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-16 02:35:06.683729 | orchestrator | changed: [testbed-manager] 2026-02-16 02:35:06.683842 | orchestrator | 2026-02-16 02:35:06.683876 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-16 02:35:07.253780 | orchestrator | changed: [testbed-manager] 2026-02-16 02:35:07.253898 | orchestrator | 2026-02-16 02:35:07.253918 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-16 02:35:07.298551 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-16 02:35:07.298639 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-16 02:35:07.298650 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-16 02:35:07.298657 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-16 02:35:09.501731 | orchestrator | changed: [testbed-manager] 2026-02-16 02:35:09.501782 | orchestrator | 2026-02-16 02:35:09.501792 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-16 02:35:18.088899 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-16 02:35:18.089006 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-16 02:35:18.089024 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-16 02:35:18.089037 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-16 02:35:18.089058 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-16 02:35:18.089070 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-16 02:35:18.089081 | orchestrator | 2026-02-16 02:35:18.089094 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-16 02:35:19.132101 | orchestrator | changed: [testbed-manager] 2026-02-16 02:35:19.132190 | orchestrator | 2026-02-16 02:35:19.132212 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-16 02:35:19.168949 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:35:19.169026 | orchestrator | 2026-02-16 02:35:19.169038 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-16 02:35:22.227550 | orchestrator | changed: [testbed-manager] 2026-02-16 02:35:22.227591 | orchestrator | 2026-02-16 02:35:22.227599 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-16 02:35:22.272220 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:35:22.272265 | orchestrator | 2026-02-16 02:35:22.272275 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-16 02:36:55.648412 | orchestrator | changed: [testbed-manager] 2026-02-16 
02:36:55.648514 | orchestrator | 2026-02-16 02:36:55.648533 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-16 02:36:56.749651 | orchestrator | ok: [testbed-manager] 2026-02-16 02:36:56.749691 | orchestrator | 2026-02-16 02:36:56.749698 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 02:36:56.749705 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-16 02:36:56.749710 | orchestrator | 2026-02-16 02:36:56.983371 | orchestrator | ok: Runtime: 0:02:15.172184 2026-02-16 02:36:56.999578 | 2026-02-16 02:36:56.999724 | TASK [Reboot manager] 2026-02-16 02:36:58.536390 | orchestrator | ok: Runtime: 0:00:00.953803 2026-02-16 02:36:58.551067 | 2026-02-16 02:36:58.551237 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-16 02:37:12.826279 | orchestrator | ok 2026-02-16 02:37:12.837590 | 2026-02-16 02:37:12.837751 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-16 02:38:12.875255 | orchestrator | ok 2026-02-16 02:38:12.883459 | 2026-02-16 02:38:12.883574 | TASK [Deploy manager + bootstrap nodes] 2026-02-16 02:38:15.331025 | orchestrator | 2026-02-16 02:38:15.331227 | orchestrator | # DEPLOY MANAGER 2026-02-16 02:38:15.331252 | orchestrator | 2026-02-16 02:38:15.331267 | orchestrator | + set -e 2026-02-16 02:38:15.331276 | orchestrator | + echo 2026-02-16 02:38:15.331286 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-16 02:38:15.331299 | orchestrator | + echo 2026-02-16 02:38:15.331334 | orchestrator | + cat /opt/manager-vars.sh 2026-02-16 02:38:15.334264 | orchestrator | export NUMBER_OF_NODES=6 2026-02-16 02:38:15.334289 | orchestrator | 2026-02-16 02:38:15.334299 | orchestrator | export CEPH_VERSION=reef 2026-02-16 02:38:15.334310 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-16 02:38:15.334320 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-02-16 02:38:15.334339 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-16 02:38:15.334348 | orchestrator | 2026-02-16 02:38:15.334363 | orchestrator | export ARA=false 2026-02-16 02:38:15.334372 | orchestrator | export DEPLOY_MODE=manager 2026-02-16 02:38:15.334386 | orchestrator | export TEMPEST=false 2026-02-16 02:38:15.334396 | orchestrator | export IS_ZUUL=true 2026-02-16 02:38:15.334404 | orchestrator | 2026-02-16 02:38:15.334418 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120 2026-02-16 02:38:15.334428 | orchestrator | export EXTERNAL_API=false 2026-02-16 02:38:15.334437 | orchestrator | 2026-02-16 02:38:15.334445 | orchestrator | export IMAGE_USER=ubuntu 2026-02-16 02:38:15.334456 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-16 02:38:15.334465 | orchestrator | 2026-02-16 02:38:15.334473 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-16 02:38:15.334487 | orchestrator | 2026-02-16 02:38:15.334496 | orchestrator | + echo 2026-02-16 02:38:15.334509 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-16 02:38:15.335459 | orchestrator | ++ export INTERACTIVE=false 2026-02-16 02:38:15.335613 | orchestrator | ++ INTERACTIVE=false 2026-02-16 02:38:15.335628 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-16 02:38:15.335652 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-16 02:38:15.335673 | orchestrator | + source /opt/manager-vars.sh 2026-02-16 02:38:15.335682 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-16 02:38:15.335691 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-16 02:38:15.335700 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-16 02:38:15.335709 | orchestrator | ++ CEPH_VERSION=reef 2026-02-16 02:38:15.335718 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-16 02:38:15.335729 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-16 02:38:15.335767 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-16 02:38:15.335785 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-16 02:38:15.335797 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-16 02:38:15.335829 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-16 02:38:15.335848 | orchestrator | ++ export ARA=false 2026-02-16 02:38:15.335857 | orchestrator | ++ ARA=false 2026-02-16 02:38:15.335866 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-16 02:38:15.335875 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-16 02:38:15.335888 | orchestrator | ++ export TEMPEST=false 2026-02-16 02:38:15.335897 | orchestrator | ++ TEMPEST=false 2026-02-16 02:38:15.335905 | orchestrator | ++ export IS_ZUUL=true 2026-02-16 02:38:15.335914 | orchestrator | ++ IS_ZUUL=true 2026-02-16 02:38:15.335923 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120 2026-02-16 02:38:15.335932 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120 2026-02-16 02:38:15.335941 | orchestrator | ++ export EXTERNAL_API=false 2026-02-16 02:38:15.335950 | orchestrator | ++ EXTERNAL_API=false 2026-02-16 02:38:15.335958 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-16 02:38:15.335967 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-16 02:38:15.335976 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-16 02:38:15.335985 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-16 02:38:15.335994 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-16 02:38:15.336003 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-16 02:38:15.336014 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-16 02:38:15.389730 | orchestrator | + docker version 2026-02-16 02:38:15.496179 | orchestrator | Client: Docker Engine - Community 2026-02-16 02:38:15.496298 | orchestrator | Version: 27.5.1 2026-02-16 02:38:15.496312 | orchestrator | API version: 1.47 2026-02-16 02:38:15.496320 | orchestrator | Go version: go1.22.11 2026-02-16 02:38:15.496328 | orchestrator | Git commit: 9f9e405 2026-02-16 02:38:15.496336 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-16 02:38:15.496344 | orchestrator | OS/Arch: linux/amd64 2026-02-16 02:38:15.496352 | orchestrator | Context: default 2026-02-16 02:38:15.496359 | orchestrator | 2026-02-16 02:38:15.496367 | orchestrator | Server: Docker Engine - Community 2026-02-16 02:38:15.496375 | orchestrator | Engine: 2026-02-16 02:38:15.496383 | orchestrator | Version: 27.5.1 2026-02-16 02:38:15.496391 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-16 02:38:15.496420 | orchestrator | Go version: go1.22.11 2026-02-16 02:38:15.496428 | orchestrator | Git commit: 4c9b3b0 2026-02-16 02:38:15.496436 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-16 02:38:15.496443 | orchestrator | OS/Arch: linux/amd64 2026-02-16 02:38:15.496450 | orchestrator | Experimental: false 2026-02-16 02:38:15.496458 | orchestrator | containerd: 2026-02-16 02:38:15.496477 | orchestrator | Version: v2.2.1 2026-02-16 02:38:15.496484 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-16 02:38:15.496492 | orchestrator | runc: 2026-02-16 02:38:15.496500 | orchestrator | Version: 1.3.4 2026-02-16 02:38:15.496507 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-16 02:38:15.496514 | orchestrator | docker-init: 2026-02-16 02:38:15.496521 | orchestrator | Version: 0.19.0 2026-02-16 02:38:15.496529 | orchestrator | GitCommit: de40ad0 2026-02-16 02:38:15.499683 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-16 02:38:15.508428 | orchestrator | + set -e 2026-02-16 02:38:15.508513 | orchestrator | + source /opt/manager-vars.sh 2026-02-16 02:38:15.508526 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-16 02:38:15.508537 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-16 02:38:15.508546 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-16 02:38:15.508556 | orchestrator | ++ CEPH_VERSION=reef 2026-02-16 02:38:15.508565 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-16 
02:38:15.508577 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-16 02:38:15.508587 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-16 02:38:15.508597 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-16 02:38:15.508606 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-16 02:38:15.508616 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-16 02:38:15.508625 | orchestrator | ++ export ARA=false 2026-02-16 02:38:15.508635 | orchestrator | ++ ARA=false 2026-02-16 02:38:15.508645 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-16 02:38:15.508655 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-16 02:38:15.508665 | orchestrator | ++ export TEMPEST=false 2026-02-16 02:38:15.508674 | orchestrator | ++ TEMPEST=false 2026-02-16 02:38:15.508684 | orchestrator | ++ export IS_ZUUL=true 2026-02-16 02:38:15.508693 | orchestrator | ++ IS_ZUUL=true 2026-02-16 02:38:15.508702 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120 2026-02-16 02:38:15.508712 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120 2026-02-16 02:38:15.508722 | orchestrator | ++ export EXTERNAL_API=false 2026-02-16 02:38:15.508731 | orchestrator | ++ EXTERNAL_API=false 2026-02-16 02:38:15.508741 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-16 02:38:15.508750 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-16 02:38:15.508767 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-16 02:38:15.508777 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-16 02:38:15.508787 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-16 02:38:15.508796 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-16 02:38:15.508806 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-16 02:38:15.508815 | orchestrator | ++ export INTERACTIVE=false 2026-02-16 02:38:15.508825 | orchestrator | ++ INTERACTIVE=false 2026-02-16 02:38:15.508834 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-16 02:38:15.508848 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2026-02-16 02:38:15.508857 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-16 02:38:15.508867 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-02-16 02:38:15.515347 | orchestrator | + set -e 2026-02-16 02:38:15.515376 | orchestrator | + VERSION=9.5.0 2026-02-16 02:38:15.515391 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-02-16 02:38:15.522998 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-16 02:38:15.523047 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-16 02:38:15.527101 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-16 02:38:15.531969 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-16 02:38:15.540583 | orchestrator | /opt/configuration ~ 2026-02-16 02:38:15.540619 | orchestrator | + set -e 2026-02-16 02:38:15.540631 | orchestrator | + pushd /opt/configuration 2026-02-16 02:38:15.540642 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-16 02:38:15.541904 | orchestrator | + source /opt/venv/bin/activate 2026-02-16 02:38:15.542972 | orchestrator | ++ deactivate nondestructive 2026-02-16 02:38:15.542997 | orchestrator | ++ '[' -n '' ']' 2026-02-16 02:38:15.543075 | orchestrator | ++ '[' -n '' ']' 2026-02-16 02:38:15.543933 | orchestrator | ++ hash -r 2026-02-16 02:38:15.543952 | orchestrator | ++ '[' -n '' ']' 2026-02-16 02:38:15.543964 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-16 02:38:15.543977 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-16 02:38:15.543989 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-16 02:38:15.544001 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-16 02:38:15.544014 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-16 02:38:15.544026 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-16 02:38:15.544039 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-16 02:38:15.544052 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-16 02:38:15.544064 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-16 02:38:15.544074 | orchestrator | ++ export PATH 2026-02-16 02:38:15.544086 | orchestrator | ++ '[' -n '' ']' 2026-02-16 02:38:15.544097 | orchestrator | ++ '[' -z '' ']' 2026-02-16 02:38:15.544108 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-16 02:38:15.544118 | orchestrator | ++ PS1='(venv) ' 2026-02-16 02:38:15.544129 | orchestrator | ++ export PS1 2026-02-16 02:38:15.544140 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-16 02:38:15.544151 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-16 02:38:15.544162 | orchestrator | ++ hash -r 2026-02-16 02:38:15.544173 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-16 02:38:16.483667 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-16 02:38:16.484705 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-16 02:38:16.486112 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-16 02:38:16.487582 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-16 02:38:16.488703 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-16 02:38:16.498635 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-16 02:38:16.500125 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-16 02:38:16.501215 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-16 02:38:16.502634 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-16 02:38:16.531648 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-16 02:38:16.533000 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-16 02:38:16.534876 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-16 02:38:16.536244 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-16 02:38:16.540124 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-16 02:38:16.735281 | orchestrator | ++ which gilt 2026-02-16 02:38:16.739583 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-16 02:38:16.739627 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-16 02:38:16.957464 | orchestrator | osism.cfg-generics: 2026-02-16 02:38:17.102059 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-16 02:38:17.102180 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-16 02:38:17.102223 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-16 02:38:17.102534 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-16 02:38:17.665332 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-16 02:38:17.673605 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-16 02:38:17.960275 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-16 02:38:18.004652 | orchestrator | ~ 2026-02-16 02:38:18.004721 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-16 02:38:18.004732 | orchestrator | + deactivate 2026-02-16 02:38:18.004740 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-16 02:38:18.004749 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-16 02:38:18.004755 | orchestrator | + export PATH 2026-02-16 02:38:18.004759 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-16 02:38:18.004763 | orchestrator | + '[' -n '' ']' 2026-02-16 02:38:18.004769 | orchestrator | + hash -r 2026-02-16 02:38:18.004773 | orchestrator | + '[' -n '' ']' 2026-02-16 02:38:18.004777 | orchestrator | + unset VIRTUAL_ENV 2026-02-16 02:38:18.004781 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-16 02:38:18.004785 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-16 02:38:18.004789 | orchestrator | + unset -f deactivate 2026-02-16 02:38:18.004792 | orchestrator | + popd 2026-02-16 02:38:18.006273 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-16 02:38:18.006298 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-02-16 02:38:18.006690 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-16 02:38:18.058677 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-16 02:38:18.058775 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-02-16 02:38:18.058801 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-16 02:38:18.112518 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-16 02:38:18.112658 | orchestrator | ++ semver 2024.2 2025.1 2026-02-16 02:38:18.167870 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-16 02:38:18.167972 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-02-16 02:38:18.255718 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-16 02:38:18.255843 | orchestrator | + source /opt/venv/bin/activate 2026-02-16 02:38:18.255860 | orchestrator | ++ deactivate nondestructive 2026-02-16 02:38:18.255874 | orchestrator | ++ '[' -n '' ']' 2026-02-16 02:38:18.255885 | orchestrator | ++ '[' -n '' ']' 2026-02-16 02:38:18.255896 | orchestrator | ++ hash -r 2026-02-16 02:38:18.255907 | orchestrator | ++ '[' -n '' ']' 2026-02-16 02:38:18.255918 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-16 02:38:18.255929 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-16 02:38:18.255940 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-16 02:38:18.255951 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-16 02:38:18.255962 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-16 02:38:18.255999 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-16 02:38:18.256012 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-16 02:38:18.256024 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-16 02:38:18.256054 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-16 02:38:18.256066 | orchestrator | ++ export PATH 2026-02-16 02:38:18.256089 | orchestrator | ++ '[' -n '' ']' 2026-02-16 02:38:18.256101 | orchestrator | ++ '[' -z '' ']' 2026-02-16 02:38:18.256111 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-16 02:38:18.256122 | orchestrator | ++ PS1='(venv) ' 2026-02-16 02:38:18.256133 | orchestrator | ++ export PS1 2026-02-16 02:38:18.256144 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-16 02:38:18.256155 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-16 02:38:18.256165 | orchestrator | ++ hash -r 2026-02-16 02:38:18.256177 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-02-16 02:38:19.264757 | orchestrator | 2026-02-16 02:38:19.264872 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-02-16 02:38:19.264889 | orchestrator | 2026-02-16 02:38:19.264901 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-16 02:38:19.787299 | orchestrator | ok: [testbed-manager] 2026-02-16 02:38:19.787403 | orchestrator | 2026-02-16 02:38:19.787419 | orchestrator | TASK [Copy fact files] ********************************************************* 
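The `source /opt/venv/bin/activate` internals traced above (saving `_OLD_VIRTUAL_PATH`, prepending `/opt/venv/bin`, later restored by `deactivate`) reduce to this sketch; it is a simplification for illustration, since the real `activate` also manages `PS1`, `hash -r`, and cygwin/msys cases:

```shell
# Minimal model of what a venv's bin/activate and deactivate do to PATH.
activate_venv() {
    VIRTUAL_ENV="$1"
    _OLD_VIRTUAL_PATH="$PATH"       # saved so deactivate can restore it
    PATH="$VIRTUAL_ENV/bin:$PATH"   # venv binaries now win command lookup
    export VIRTUAL_ENV PATH
}

deactivate_venv() {
    PATH="$_OLD_VIRTUAL_PATH"       # restore the pre-venv PATH
    export PATH
    unset VIRTUAL_ENV _OLD_VIRTUAL_PATH
}
```

This PATH prepend is why the trace's bare `pip3` and `ansible-playbook` calls resolve inside `/opt/venv` between the `activate` and `deactivate` lines.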
2026-02-16 02:38:20.721879 | orchestrator | changed: [testbed-manager] 2026-02-16 02:38:20.722008 | orchestrator | 2026-02-16 02:38:20.722094 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-02-16 02:38:20.722159 | orchestrator | 2026-02-16 02:38:20.722179 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-16 02:38:22.816992 | orchestrator | ok: [testbed-manager] 2026-02-16 02:38:22.817143 | orchestrator | 2026-02-16 02:38:22.817966 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-02-16 02:38:22.861036 | orchestrator | ok: [testbed-manager] 2026-02-16 02:38:22.861132 | orchestrator | 2026-02-16 02:38:22.861150 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-02-16 02:38:23.305843 | orchestrator | changed: [testbed-manager] 2026-02-16 02:38:23.305948 | orchestrator | 2026-02-16 02:38:23.305968 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-02-16 02:38:23.339716 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:38:23.339800 | orchestrator | 2026-02-16 02:38:23.339814 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-16 02:38:23.661794 | orchestrator | changed: [testbed-manager] 2026-02-16 02:38:23.661898 | orchestrator | 2026-02-16 02:38:23.661915 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-02-16 02:38:23.979482 | orchestrator | ok: [testbed-manager] 2026-02-16 02:38:23.979582 | orchestrator | 2026-02-16 02:38:23.979598 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-02-16 02:38:24.098125 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:38:24.098220 | orchestrator | 2026-02-16 02:38:24.098267 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-02-16 02:38:24.098280 | orchestrator | 2026-02-16 02:38:24.098292 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-16 02:38:26.783886 | orchestrator | ok: [testbed-manager] 2026-02-16 02:38:26.784008 | orchestrator | 2026-02-16 02:38:26.784026 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-02-16 02:38:26.885084 | orchestrator | included: osism.services.traefik for testbed-manager 2026-02-16 02:38:26.885157 | orchestrator | 2026-02-16 02:38:26.885167 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-02-16 02:38:26.935901 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-02-16 02:38:26.936013 | orchestrator | 2026-02-16 02:38:26.936036 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-02-16 02:38:27.996377 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-02-16 02:38:27.996480 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-02-16 02:38:27.996496 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-02-16 02:38:27.996508 | orchestrator | 2026-02-16 02:38:27.996524 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-02-16 02:38:29.729242 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-02-16 02:38:29.729397 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-02-16 02:38:29.729413 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-02-16 02:38:29.729426 | orchestrator | 2026-02-16 02:38:29.729439 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-02-16 02:38:30.362309 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-16 02:38:30.362410 | orchestrator | changed: [testbed-manager] 2026-02-16 02:38:30.362427 | orchestrator | 2026-02-16 02:38:30.362440 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-02-16 02:38:31.038072 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-16 02:38:31.038152 | orchestrator | changed: [testbed-manager] 2026-02-16 02:38:31.038163 | orchestrator | 2026-02-16 02:38:31.038171 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-02-16 02:38:31.090883 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:38:31.090979 | orchestrator | 2026-02-16 02:38:31.090994 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-02-16 02:38:31.429180 | orchestrator | ok: [testbed-manager] 2026-02-16 02:38:31.429369 | orchestrator | 2026-02-16 02:38:31.429400 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-02-16 02:38:31.494871 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-02-16 02:38:31.494968 | orchestrator | 2026-02-16 02:38:31.494985 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-02-16 02:38:32.578773 | orchestrator | changed: [testbed-manager] 2026-02-16 02:38:32.578883 | orchestrator | 2026-02-16 02:38:32.578901 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-02-16 02:38:33.379051 | orchestrator | changed: [testbed-manager] 2026-02-16 02:38:33.379150 | orchestrator | 2026-02-16 02:38:33.379167 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-02-16 02:38:43.905606 | 
orchestrator | changed: [testbed-manager] 2026-02-16 02:38:43.905715 | orchestrator | 2026-02-16 02:38:43.905730 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-02-16 02:38:43.969483 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:38:43.969558 | orchestrator | 2026-02-16 02:38:43.969587 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-02-16 02:38:43.969595 | orchestrator | 2026-02-16 02:38:43.969602 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-16 02:38:45.804675 | orchestrator | ok: [testbed-manager] 2026-02-16 02:38:45.804750 | orchestrator | 2026-02-16 02:38:45.804759 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-02-16 02:38:45.911252 | orchestrator | included: osism.services.manager for testbed-manager 2026-02-16 02:38:45.911383 | orchestrator | 2026-02-16 02:38:45.911400 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-16 02:38:45.964810 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-16 02:38:45.964911 | orchestrator | 2026-02-16 02:38:45.964929 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-16 02:38:48.284670 | orchestrator | ok: [testbed-manager] 2026-02-16 02:38:48.284763 | orchestrator | 2026-02-16 02:38:48.284778 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-02-16 02:38:48.336448 | orchestrator | ok: [testbed-manager] 2026-02-16 02:38:48.336534 | orchestrator | 2026-02-16 02:38:48.336547 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-16 02:38:48.471601 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-16 02:38:48.471703 | orchestrator | 2026-02-16 02:38:48.471718 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-16 02:38:51.311294 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-02-16 02:38:51.311473 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-02-16 02:38:51.311491 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-16 02:38:51.311503 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-02-16 02:38:51.311514 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-16 02:38:51.311525 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-16 02:38:51.311536 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-16 02:38:51.311547 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-02-16 02:38:51.311558 | orchestrator | 2026-02-16 02:38:51.311572 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-16 02:38:51.949729 | orchestrator | changed: [testbed-manager] 2026-02-16 02:38:51.949834 | orchestrator | 2026-02-16 02:38:51.949853 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-16 02:38:52.566526 | orchestrator | changed: [testbed-manager] 2026-02-16 02:38:52.566629 | orchestrator | 2026-02-16 02:38:52.566647 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-16 02:38:52.646546 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-16 02:38:52.646668 | orchestrator | 2026-02-16 02:38:52.646687 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-02-16 02:38:53.828898 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-02-16 02:38:53.828987 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-02-16 02:38:53.829000 | orchestrator | 2026-02-16 02:38:53.829010 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-16 02:38:54.453541 | orchestrator | changed: [testbed-manager] 2026-02-16 02:38:54.453636 | orchestrator | 2026-02-16 02:38:54.453648 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-16 02:38:54.510951 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:38:54.511042 | orchestrator | 2026-02-16 02:38:54.511057 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-16 02:38:54.580935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-16 02:38:54.581033 | orchestrator | 2026-02-16 02:38:54.581057 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-16 02:38:55.234966 | orchestrator | changed: [testbed-manager] 2026-02-16 02:38:55.235061 | orchestrator | 2026-02-16 02:38:55.235088 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-16 02:38:55.301940 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-16 02:38:55.302065 | orchestrator | 2026-02-16 02:38:55.302077 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-16 02:38:56.627720 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-16 02:38:56.627818 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-02-16 02:38:56.627830 | orchestrator | changed: [testbed-manager] 2026-02-16 02:38:56.627839 | orchestrator | 2026-02-16 02:38:56.627847 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-16 02:38:57.258567 | orchestrator | changed: [testbed-manager] 2026-02-16 02:38:57.258652 | orchestrator | 2026-02-16 02:38:57.258665 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-16 02:38:57.315543 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:38:57.315645 | orchestrator | 2026-02-16 02:38:57.315662 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-16 02:38:57.413139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-16 02:38:57.413227 | orchestrator | 2026-02-16 02:38:57.413239 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-16 02:38:57.948694 | orchestrator | changed: [testbed-manager] 2026-02-16 02:38:57.948767 | orchestrator | 2026-02-16 02:38:57.948774 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-16 02:38:58.370160 | orchestrator | changed: [testbed-manager] 2026-02-16 02:38:58.370245 | orchestrator | 2026-02-16 02:38:58.370256 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-16 02:38:59.589323 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-02-16 02:38:59.589459 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-02-16 02:38:59.589471 | orchestrator | 2026-02-16 02:38:59.589480 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-16 02:39:00.208220 | orchestrator | changed: [testbed-manager] 2026-02-16 
02:39:00.208296 | orchestrator | 2026-02-16 02:39:00.208303 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-16 02:39:00.564533 | orchestrator | ok: [testbed-manager] 2026-02-16 02:39:00.564660 | orchestrator | 2026-02-16 02:39:00.564679 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-16 02:39:00.919081 | orchestrator | changed: [testbed-manager] 2026-02-16 02:39:00.919200 | orchestrator | 2026-02-16 02:39:00.919221 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-16 02:39:00.962135 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:39:00.962233 | orchestrator | 2026-02-16 02:39:00.962248 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-16 02:39:01.027649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-16 02:39:01.027768 | orchestrator | 2026-02-16 02:39:01.027783 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-16 02:39:01.065087 | orchestrator | ok: [testbed-manager] 2026-02-16 02:39:01.065178 | orchestrator | 2026-02-16 02:39:01.065191 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-16 02:39:03.049139 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-02-16 02:39:03.049257 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-02-16 02:39:03.049283 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-02-16 02:39:03.049304 | orchestrator | 2026-02-16 02:39:03.049324 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-16 02:39:03.746847 | orchestrator | changed: [testbed-manager] 2026-02-16 
02:39:03.746938 | orchestrator | 2026-02-16 02:39:03.746951 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-16 02:39:04.454927 | orchestrator | changed: [testbed-manager] 2026-02-16 02:39:04.455025 | orchestrator | 2026-02-16 02:39:04.455042 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-16 02:39:05.159011 | orchestrator | changed: [testbed-manager] 2026-02-16 02:39:05.159141 | orchestrator | 2026-02-16 02:39:05.159172 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-16 02:39:05.224782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-16 02:39:05.225743 | orchestrator | 2026-02-16 02:39:05.225816 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-16 02:39:05.266269 | orchestrator | ok: [testbed-manager] 2026-02-16 02:39:05.266334 | orchestrator | 2026-02-16 02:39:05.266341 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-16 02:39:05.944744 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-02-16 02:39:05.944811 | orchestrator | 2026-02-16 02:39:05.944818 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-16 02:39:06.030447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-16 02:39:06.030528 | orchestrator | 2026-02-16 02:39:06.030541 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-16 02:39:06.762734 | orchestrator | changed: [testbed-manager] 2026-02-16 02:39:06.762836 | orchestrator | 2026-02-16 02:39:06.762854 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-02-16 02:39:07.351633 | orchestrator | ok: [testbed-manager] 2026-02-16 02:39:07.351765 | orchestrator | 2026-02-16 02:39:07.351794 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-16 02:39:07.410354 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:39:07.410487 | orchestrator | 2026-02-16 02:39:07.410504 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-16 02:39:07.460010 | orchestrator | ok: [testbed-manager] 2026-02-16 02:39:07.460094 | orchestrator | 2026-02-16 02:39:07.460101 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-16 02:39:08.274775 | orchestrator | changed: [testbed-manager] 2026-02-16 02:39:08.274857 | orchestrator | 2026-02-16 02:39:08.274864 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-16 02:40:18.680352 | orchestrator | changed: [testbed-manager] 2026-02-16 02:40:18.680456 | orchestrator | 2026-02-16 02:40:18.680469 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-16 02:40:19.650577 | orchestrator | ok: [testbed-manager] 2026-02-16 02:40:19.650671 | orchestrator | 2026-02-16 02:40:19.650685 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-16 02:40:19.710250 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:40:19.710340 | orchestrator | 2026-02-16 02:40:19.710351 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-16 02:40:21.949472 | orchestrator | changed: [testbed-manager] 2026-02-16 02:40:21.949585 | orchestrator | 2026-02-16 02:40:21.949602 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
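After "Manage manager service" starts the containers, the role gates on container health before continuing. The actual gate is an Ansible handler with `retries`, not a script; as a hypothetical stand-alone equivalent, a poll on Docker's standard `.State.Health.Status` field could look like:

```shell
# Hedged sketch: poll a container's health status with bounded retries.
# wait_healthy NAME [RETRIES] [DELAY_SECONDS]; returns 0 once healthy.
wait_healthy() {
    container="$1"; retries="${2:-50}"; delay="${3:-5}"
    while [ "$retries" -gt 0 ]; do
        status="$(docker inspect --format '{{.State.Health.Status}}' "$container" 2>/dev/null)"
        [ "$status" = "healthy" ] && return 0
        echo "FAILED - RETRYING: $container ($retries retries left)"
        retries=$((retries - 1))
        sleep "$delay"
    done
    return 1
}
```

The "FAILED - RETRYING ... (N retries left)" lines in the log below are Ansible's rendering of exactly this kind of bounded poll.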
2026-02-16 02:40:22.000489 | orchestrator | ok: [testbed-manager] 2026-02-16 02:40:22.000591 | orchestrator | 2026-02-16 02:40:22.000608 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-16 02:40:22.000621 | orchestrator | 2026-02-16 02:40:22.000633 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-16 02:40:22.157944 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:40:22.158100 | orchestrator | 2026-02-16 02:40:22.158118 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-16 02:41:22.207549 | orchestrator | Pausing for 60 seconds 2026-02-16 02:41:22.207691 | orchestrator | changed: [testbed-manager] 2026-02-16 02:41:22.207718 | orchestrator | 2026-02-16 02:41:22.207732 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-16 02:41:24.760711 | orchestrator | changed: [testbed-manager] 2026-02-16 02:41:24.760815 | orchestrator | 2026-02-16 02:41:24.760831 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-16 02:42:26.802274 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-16 02:42:26.802394 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-02-16 02:42:26.802430 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
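Earlier in the trace, feature gates are decided by three-way version comparisons (`semver 9.5.0 7.0.0` printing 1, `semver 2024.2 2025.1` printing -1, each fed into `[[ ... -ge 0 ]]`). The testbed's real `semver` helper is not shown in this log; a stand-in with the same -1/0/1 contract can be built on GNU `sort -V` (note this is an assumption, and `sort -V` does not implement full semver pre-release ordering such as `10.0.0-0`):

```shell
# Hypothetical three-way version compare: prints -1, 0, or 1
# for "$1" less than, equal to, or greater than "$2".
semver_cmp() {
    if [ "$1" = "$2" ]; then echo 0
    elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then echo -1
    else echo 1
    fi
}
```

With this contract, the trace's `[[ $(semver_cmp 9.5.0 7.0.0) -ge 0 ]]` style gate enables a feature for any release at or above the threshold.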
2026-02-16 02:42:26.802448 | orchestrator | changed: [testbed-manager] 2026-02-16 02:42:26.802467 | orchestrator | 2026-02-16 02:42:26.802487 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-16 02:42:36.526440 | orchestrator | changed: [testbed-manager] 2026-02-16 02:42:36.526562 | orchestrator | 2026-02-16 02:42:36.526578 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-16 02:42:36.602428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-16 02:42:36.602534 | orchestrator | 2026-02-16 02:42:36.602551 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-16 02:42:36.602564 | orchestrator | 2026-02-16 02:42:36.602575 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-16 02:42:36.651172 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:42:36.651325 | orchestrator | 2026-02-16 02:42:36.651375 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-16 02:42:36.722792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-16 02:42:36.722878 | orchestrator | 2026-02-16 02:42:36.722889 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-16 02:42:37.448672 | orchestrator | changed: [testbed-manager] 2026-02-16 02:42:37.448770 | orchestrator | 2026-02-16 02:42:37.448785 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-16 02:42:40.590655 | orchestrator | ok: [testbed-manager] 2026-02-16 02:42:40.590746 | orchestrator | 2026-02-16 02:42:40.590757 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-02-16 02:42:40.662806 | orchestrator | ok: [testbed-manager] => { 2026-02-16 02:42:40.662891 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-16 02:42:40.662902 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-16 02:42:40.662910 | orchestrator | "Checking running containers against expected versions...", 2026-02-16 02:42:40.662919 | orchestrator | "", 2026-02-16 02:42:40.662927 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-16 02:42:40.662935 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-16 02:42:40.662944 | orchestrator | " Enabled: true", 2026-02-16 02:42:40.662951 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-16 02:42:40.662958 | orchestrator | " Status: ✅ MATCH", 2026-02-16 02:42:40.662966 | orchestrator | "", 2026-02-16 02:42:40.662974 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-16 02:42:40.663004 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-16 02:42:40.663011 | orchestrator | " Enabled: true", 2026-02-16 02:42:40.663019 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-16 02:42:40.663026 | orchestrator | " Status: ✅ MATCH", 2026-02-16 02:42:40.663033 | orchestrator | "", 2026-02-16 02:42:40.663040 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-16 02:42:40.663047 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-16 02:42:40.663055 | orchestrator | " Enabled: true", 2026-02-16 02:42:40.663062 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-16 02:42:40.663069 | orchestrator | " Status: ✅ MATCH", 2026-02-16 02:42:40.663076 | orchestrator | 
"", 2026-02-16 02:42:40.663083 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-16 02:42:40.663090 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-16 02:42:40.663097 | orchestrator | " Enabled: true", 2026-02-16 02:42:40.663105 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-16 02:42:40.663111 | orchestrator | " Status: ✅ MATCH", 2026-02-16 02:42:40.663118 | orchestrator | "", 2026-02-16 02:42:40.663127 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-16 02:42:40.663135 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-16 02:42:40.663142 | orchestrator | " Enabled: true", 2026-02-16 02:42:40.663149 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-16 02:42:40.663156 | orchestrator | " Status: ✅ MATCH", 2026-02-16 02:42:40.663163 | orchestrator | "", 2026-02-16 02:42:40.663170 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-16 02:42:40.663177 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-16 02:42:40.663184 | orchestrator | " Enabled: true", 2026-02-16 02:42:40.663191 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-16 02:42:40.663198 | orchestrator | " Status: ✅ MATCH", 2026-02-16 02:42:40.663205 | orchestrator | "", 2026-02-16 02:42:40.663259 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-16 02:42:40.663266 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-16 02:42:40.663273 | orchestrator | " Enabled: true", 2026-02-16 02:42:40.663280 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-16 02:42:40.663287 | orchestrator | " Status: ✅ MATCH", 2026-02-16 02:42:40.663294 | orchestrator | "", 2026-02-16 02:42:40.663301 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-02-16 02:42:40.663307 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-16 02:42:40.663316 | orchestrator | " Enabled: true", 2026-02-16 02:42:40.663323 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-16 02:42:40.663330 | orchestrator | " Status: ✅ MATCH", 2026-02-16 02:42:40.663337 | orchestrator | "", 2026-02-16 02:42:40.663344 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-16 02:42:40.663351 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-16 02:42:40.663358 | orchestrator | " Enabled: true", 2026-02-16 02:42:40.663366 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-16 02:42:40.663374 | orchestrator | " Status: ✅ MATCH", 2026-02-16 02:42:40.663382 | orchestrator | "", 2026-02-16 02:42:40.663389 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-16 02:42:40.663397 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-16 02:42:40.663405 | orchestrator | " Enabled: true", 2026-02-16 02:42:40.663413 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-16 02:42:40.663421 | orchestrator | " Status: ✅ MATCH", 2026-02-16 02:42:40.663428 | orchestrator | "", 2026-02-16 02:42:40.663436 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-16 02:42:40.663451 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-16 02:42:40.663461 | orchestrator | " Enabled: true", 2026-02-16 02:42:40.663470 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-16 02:42:40.663480 | orchestrator | " Status: ✅ MATCH", 2026-02-16 02:42:40.663489 | orchestrator | "", 2026-02-16 02:42:40.663496 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-16 02:42:40.663501 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-16 02:42:40.663507 | orchestrator | " Enabled: true", 2026-02-16 02:42:40.663513 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-16 02:42:40.663521 | orchestrator | " Status: ✅ MATCH", 2026-02-16 02:42:40.663531 | orchestrator | "", 2026-02-16 02:42:40.663540 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-16 02:42:40.663549 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-16 02:42:40.663558 | orchestrator | " Enabled: true", 2026-02-16 02:42:40.663568 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-16 02:42:40.663578 | orchestrator | " Status: ✅ MATCH", 2026-02-16 02:42:40.663588 | orchestrator | "", 2026-02-16 02:42:40.663597 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-16 02:42:40.663609 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-16 02:42:40.663618 | orchestrator | " Enabled: true", 2026-02-16 02:42:40.663626 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-16 02:42:40.663652 | orchestrator | " Status: ✅ MATCH", 2026-02-16 02:42:40.663661 | orchestrator | "", 2026-02-16 02:42:40.663669 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-16 02:42:40.663677 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-16 02:42:40.663693 | orchestrator | " Enabled: true", 2026-02-16 02:42:40.663701 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-16 02:42:40.663710 | orchestrator | " Status: ✅ MATCH", 2026-02-16 02:42:40.663718 | orchestrator | "", 2026-02-16 02:42:40.663725 | orchestrator | "=== Summary ===", 2026-02-16 02:42:40.663732 | orchestrator | "Errors (version mismatches): 0", 2026-02-16 02:42:40.663739 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-02-16 02:42:40.663746 | orchestrator | "", 2026-02-16 02:42:40.663753 | orchestrator | "✅ All running containers match expected versions!" 2026-02-16 02:42:40.663760 | orchestrator | ] 2026-02-16 02:42:40.663768 | orchestrator | } 2026-02-16 02:42:40.663775 | orchestrator | 2026-02-16 02:42:40.663784 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-16 02:42:40.710001 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:42:40.710125 | orchestrator | 2026-02-16 02:42:40.710135 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 02:42:40.710145 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-16 02:42:40.710153 | orchestrator | 2026-02-16 02:42:40.814147 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-16 02:42:40.814267 | orchestrator | + deactivate 2026-02-16 02:42:40.814280 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-16 02:42:40.814290 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-16 02:42:40.814298 | orchestrator | + export PATH 2026-02-16 02:42:40.814307 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-16 02:42:40.814316 | orchestrator | + '[' -n '' ']' 2026-02-16 02:42:40.814324 | orchestrator | + hash -r 2026-02-16 02:42:40.814332 | orchestrator | + '[' -n '' ']' 2026-02-16 02:42:40.814341 | orchestrator | + unset VIRTUAL_ENV 2026-02-16 02:42:40.814349 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-16 02:42:40.814357 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-16 02:42:40.814365 | orchestrator | + unset -f deactivate 2026-02-16 02:42:40.814374 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-16 02:42:40.822943 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-16 02:42:40.822962 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-16 02:42:40.822996 | orchestrator | + local max_attempts=60 2026-02-16 02:42:40.823005 | orchestrator | + local name=ceph-ansible 2026-02-16 02:42:40.823013 | orchestrator | + local attempt_num=1 2026-02-16 02:42:40.823498 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-16 02:42:40.854288 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-16 02:42:40.854311 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-16 02:42:40.854320 | orchestrator | + local max_attempts=60 2026-02-16 02:42:40.854329 | orchestrator | + local name=kolla-ansible 2026-02-16 02:42:40.854337 | orchestrator | + local attempt_num=1 2026-02-16 02:42:40.854858 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-16 02:42:40.890807 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-16 02:42:40.890873 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-16 02:42:40.890884 | orchestrator | + local max_attempts=60 2026-02-16 02:42:40.890895 | orchestrator | + local name=osism-ansible 2026-02-16 02:42:40.890907 | orchestrator | + local attempt_num=1 2026-02-16 02:42:40.891749 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-16 02:42:40.925316 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-16 02:42:40.925396 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-16 02:42:40.925409 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-16 02:42:41.571296 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-16 02:42:41.758321 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-16 02:42:41.758419 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-02-16 02:42:41.758436 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-02-16 02:42:41.758448 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-16 02:42:41.758461 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-02-16 02:42:41.758494 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-02-16 02:42:41.758506 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-02-16 02:42:41.758517 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-02-16 02:42:41.758528 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-02-16 02:42:41.758539 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-02-16 02:42:41.758550 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-02-16 02:42:41.758561 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-02-16 02:42:41.758572 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-02-16 02:42:41.758604 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-02-16 02:42:41.758616 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-02-16 02:42:41.758629 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-02-16 02:42:41.764535 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-16 02:42:41.808826 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-16 02:42:41.808931 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-16 02:42:41.812946 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-16 02:42:53.968960 | orchestrator | 2026-02-16 02:42:53 | INFO  | Task 61a0bf79-d766-497f-a5a3-28d9127c30b7 (resolvconf) was prepared for execution. 2026-02-16 02:42:53.969064 | orchestrator | 2026-02-16 02:42:53 | INFO  | It takes a moment until task 61a0bf79-d766-497f-a5a3-28d9127c30b7 (resolvconf) has been started and output is visible here. 
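The `semver 9.5.0 7.0.0` call in the trace above evidently prints a comparison result that the script then tests with `[[ 1 -ge 0 ]]`. A hypothetical stand-in with that same contract (prints `1`, `0`, or `-1`; the helper name and implementation are assumptions, relying on GNU `sort -V` for version ordering):

```shell
# Compare two dot-separated versions: print 1 if $1 > $2, 0 if equal, -1 if lower.
semver_cmp() {
    if [ "$1" = "$2" ]; then
        echo 0
        return
    fi
    # sort -V orders version strings; the first line is the lower of the two.
    lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    if [ "$lower" = "$2" ]; then
        echo 1
    else
        echo -1
    fi
}
```

Under this reading, `9.5.0` vs `7.0.0` yields `1`, which satisfies the script's `-ge 0` check and lets it proceed to the `sed` edit of `ansible.cfg`.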
2026-02-16 02:43:07.685199 | orchestrator | 2026-02-16 02:43:07.685354 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-16 02:43:07.685372 | orchestrator | 2026-02-16 02:43:07.685384 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-16 02:43:07.685396 | orchestrator | Monday 16 February 2026 02:42:58 +0000 (0:00:00.140) 0:00:00.140 ******* 2026-02-16 02:43:07.685407 | orchestrator | ok: [testbed-manager] 2026-02-16 02:43:07.685419 | orchestrator | 2026-02-16 02:43:07.685430 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-16 02:43:07.685442 | orchestrator | Monday 16 February 2026 02:43:01 +0000 (0:00:03.708) 0:00:03.849 ******* 2026-02-16 02:43:07.685454 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:43:07.685466 | orchestrator | 2026-02-16 02:43:07.685477 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-16 02:43:07.685488 | orchestrator | Monday 16 February 2026 02:43:01 +0000 (0:00:00.076) 0:00:03.925 ******* 2026-02-16 02:43:07.685499 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-16 02:43:07.685511 | orchestrator | 2026-02-16 02:43:07.685522 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-16 02:43:07.685533 | orchestrator | Monday 16 February 2026 02:43:01 +0000 (0:00:00.082) 0:00:04.007 ******* 2026-02-16 02:43:07.685564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-16 02:43:07.685576 | orchestrator | 2026-02-16 02:43:07.685587 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-16 02:43:07.685598 | orchestrator | Monday 16 February 2026 02:43:01 +0000 (0:00:00.070) 0:00:04.078 ******* 2026-02-16 02:43:07.685609 | orchestrator | ok: [testbed-manager] 2026-02-16 02:43:07.685620 | orchestrator | 2026-02-16 02:43:07.685631 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-16 02:43:07.685642 | orchestrator | Monday 16 February 2026 02:43:03 +0000 (0:00:01.068) 0:00:05.146 ******* 2026-02-16 02:43:07.685652 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:43:07.685663 | orchestrator | 2026-02-16 02:43:07.685674 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-16 02:43:07.685685 | orchestrator | Monday 16 February 2026 02:43:03 +0000 (0:00:00.061) 0:00:05.208 ******* 2026-02-16 02:43:07.685721 | orchestrator | ok: [testbed-manager] 2026-02-16 02:43:07.685733 | orchestrator | 2026-02-16 02:43:07.685745 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-16 02:43:07.685758 | orchestrator | Monday 16 February 2026 02:43:03 +0000 (0:00:00.512) 0:00:05.720 ******* 2026-02-16 02:43:07.685770 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:43:07.685782 | orchestrator | 2026-02-16 02:43:07.685795 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-16 02:43:07.685809 | orchestrator | Monday 16 February 2026 02:43:03 +0000 (0:00:00.079) 0:00:05.800 ******* 2026-02-16 02:43:07.685821 | orchestrator | changed: [testbed-manager] 2026-02-16 02:43:07.685833 | orchestrator | 2026-02-16 02:43:07.685845 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-16 02:43:07.685858 | orchestrator | Monday 16 February 2026 02:43:04 +0000 (0:00:00.542) 0:00:06.342 ******* 2026-02-16 02:43:07.685870 | orchestrator | changed: 
[testbed-manager] 2026-02-16 02:43:07.685882 | orchestrator | 2026-02-16 02:43:07.685896 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-16 02:43:07.685909 | orchestrator | Monday 16 February 2026 02:43:05 +0000 (0:00:01.110) 0:00:07.453 ******* 2026-02-16 02:43:07.685922 | orchestrator | ok: [testbed-manager] 2026-02-16 02:43:07.685934 | orchestrator | 2026-02-16 02:43:07.685947 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-16 02:43:07.685960 | orchestrator | Monday 16 February 2026 02:43:06 +0000 (0:00:00.930) 0:00:08.383 ******* 2026-02-16 02:43:07.685973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-16 02:43:07.685986 | orchestrator | 2026-02-16 02:43:07.685997 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-16 02:43:07.686008 | orchestrator | Monday 16 February 2026 02:43:06 +0000 (0:00:00.082) 0:00:08.465 ******* 2026-02-16 02:43:07.686075 | orchestrator | changed: [testbed-manager] 2026-02-16 02:43:07.686087 | orchestrator | 2026-02-16 02:43:07.686098 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 02:43:07.686110 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-16 02:43:07.686121 | orchestrator | 2026-02-16 02:43:07.686132 | orchestrator | 2026-02-16 02:43:07.686143 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 02:43:07.686153 | orchestrator | Monday 16 February 2026 02:43:07 +0000 (0:00:01.122) 0:00:09.587 ******* 2026-02-16 02:43:07.686164 | orchestrator | =============================================================================== 2026-02-16 02:43:07.686175 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.71s 2026-02-16 02:43:07.686185 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.12s 2026-02-16 02:43:07.686196 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.11s 2026-02-16 02:43:07.686207 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.07s 2026-02-16 02:43:07.686217 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.93s 2026-02-16 02:43:07.686228 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s 2026-02-16 02:43:07.686257 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s 2026-02-16 02:43:07.686269 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-02-16 02:43:07.686280 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-02-16 02:43:07.686324 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-02-16 02:43:07.686336 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s 2026-02-16 02:43:07.686347 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-02-16 02:43:07.686367 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-02-16 02:43:07.981809 | orchestrator | + osism apply sshconfig 2026-02-16 02:43:20.017726 | orchestrator | 2026-02-16 02:43:20 | INFO  | Task 7f498234-33ec-453b-8dcc-60485a19b5f9 (sshconfig) was prepared for execution. 
2026-02-16 02:43:20.017845 | orchestrator | 2026-02-16 02:43:20 | INFO  | It takes a moment until task 7f498234-33ec-453b-8dcc-60485a19b5f9 (sshconfig) has been started and output is visible here. 2026-02-16 02:43:31.970609 | orchestrator | 2026-02-16 02:43:31.970784 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-16 02:43:31.970804 | orchestrator | 2026-02-16 02:43:31.970837 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-16 02:43:31.970849 | orchestrator | Monday 16 February 2026 02:43:24 +0000 (0:00:00.182) 0:00:00.182 ******* 2026-02-16 02:43:31.970861 | orchestrator | ok: [testbed-manager] 2026-02-16 02:43:31.970877 | orchestrator | 2026-02-16 02:43:31.970896 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-16 02:43:31.970914 | orchestrator | Monday 16 February 2026 02:43:24 +0000 (0:00:00.617) 0:00:00.800 ******* 2026-02-16 02:43:31.970933 | orchestrator | changed: [testbed-manager] 2026-02-16 02:43:31.970951 | orchestrator | 2026-02-16 02:43:31.970971 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-16 02:43:31.970990 | orchestrator | Monday 16 February 2026 02:43:25 +0000 (0:00:00.526) 0:00:01.327 ******* 2026-02-16 02:43:31.971009 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-02-16 02:43:31.971028 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-16 02:43:31.971041 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-16 02:43:31.971052 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-16 02:43:31.971063 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-16 02:43:31.971074 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-16 02:43:31.971085 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-02-16 02:43:31.971096 | orchestrator | 2026-02-16 02:43:31.971109 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-16 02:43:31.971128 | orchestrator | Monday 16 February 2026 02:43:31 +0000 (0:00:05.706) 0:00:07.033 ******* 2026-02-16 02:43:31.971145 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:43:31.971162 | orchestrator | 2026-02-16 02:43:31.971180 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-16 02:43:31.971198 | orchestrator | Monday 16 February 2026 02:43:31 +0000 (0:00:00.070) 0:00:07.104 ******* 2026-02-16 02:43:31.971217 | orchestrator | changed: [testbed-manager] 2026-02-16 02:43:31.971235 | orchestrator | 2026-02-16 02:43:31.971254 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 02:43:31.971274 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-16 02:43:31.971291 | orchestrator | 2026-02-16 02:43:31.971309 | orchestrator | 2026-02-16 02:43:31.971328 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 02:43:31.971347 | orchestrator | Monday 16 February 2026 02:43:31 +0000 (0:00:00.555) 0:00:07.659 ******* 2026-02-16 02:43:31.971390 | orchestrator | =============================================================================== 2026-02-16 02:43:31.971410 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.71s 2026-02-16 02:43:31.971428 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.62s 2026-02-16 02:43:31.971447 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s 2026-02-16 02:43:31.971464 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.53s 2026-02-16 02:43:31.971518 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-02-16 02:43:32.256935 | orchestrator | + osism apply known-hosts 2026-02-16 02:43:44.303252 | orchestrator | 2026-02-16 02:43:44 | INFO  | Task 2a92c817-8638-4972-9fc7-2c948623c75d (known-hosts) was prepared for execution. 2026-02-16 02:43:44.303387 | orchestrator | 2026-02-16 02:43:44 | INFO  | It takes a moment until task 2a92c817-8638-4972-9fc7-2c948623c75d (known-hosts) has been started and output is visible here. 2026-02-16 02:43:59.971416 | orchestrator | 2026-02-16 02:43:59.971610 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-16 02:43:59.971629 | orchestrator | 2026-02-16 02:43:59.971641 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-16 02:43:59.971653 | orchestrator | Monday 16 February 2026 02:43:48 +0000 (0:00:00.120) 0:00:00.120 ******* 2026-02-16 02:43:59.971664 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-16 02:43:59.971676 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-16 02:43:59.971688 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-16 02:43:59.971699 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-16 02:43:59.971710 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-16 02:43:59.971721 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-16 02:43:59.971732 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-16 02:43:59.971743 | orchestrator | 2026-02-16 02:43:59.971754 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-16 02:43:59.971766 | orchestrator | Monday 16 February 2026 02:43:53 +0000 (0:00:05.585) 0:00:05.706 ******* 2026-02-16 
02:43:59.971778 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-16 02:43:59.971791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-16 02:43:59.971802 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-16 02:43:59.971813 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-16 02:43:59.971824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-16 02:43:59.971845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-16 02:43:59.971857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-16 02:43:59.971868 | orchestrator | 2026-02-16 02:43:59.971879 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-16 02:43:59.971890 | orchestrator | Monday 16 February 2026 02:43:53 +0000 (0:00:00.148) 0:00:05.855 ******* 2026-02-16 02:43:59.971901 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIBRBBj+Oa/VK8N/YxdhyjYvnrllgYSDx3jwHW4cICogl) 2026-02-16 02:43:59.971921 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDaigbDcsPyD60cnH3XkzVyghg+NEzwF2UtymlMdPX4m4rPumZkmYcOgQHuC3iIyn5Q091IDOzvmeta2/aDOTL5DirHa++TabI2Npw6yBeSG0AmUIzEZ1OJH0rbOXLue6nnkowtFPAe6XfA004AF6E4TPvC6nGUPcHGkwKJtZznXtqz3DLM73yduZxKChUgsLvjkA9cPaVu2mmpkX7QZJaveNejID/hpQj/v4+XFLOSvjUlhJp42ANURUw1JKJ7b/H5DB5mmcpxY8opxilazhoXOgXTOSCECghChXCMfFlnzh3tZlJ5wos5hw5hkj8L4CZzwqN4NjcQSPhnhU+eTSDJosbZZl6x+UXZf7RxZJqPD/5XvQi+R7noGL6E08r8xjC7I3hcPDeF8k+TndWod7PSBL7Ee3Uhm6Ek4n2NGwgedrLTm3PtkicTL0j99dIed2a3bWBBJR1Sh/l0cd76dM4eM46kNZWDV0dxOFyjCO4yV6G09CCsM262gx/c2WA90Bs=) 2026-02-16 02:43:59.971960 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLcsg8deK86a6lkIYpLrlVoyp78ORvMrB7kRC60y7vAcNM9sR3J19rblpl7GmCmLA/gcHNq/cp8SJR2bBofJmXE=) 2026-02-16 02:43:59.971975 | orchestrator | 2026-02-16 02:43:59.971988 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-16 02:43:59.972001 | orchestrator | Monday 16 February 2026 02:43:54 +0000 (0:00:01.076) 0:00:06.931 ******* 2026-02-16 02:43:59.972034 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXqCmNL8BPRjHwhdznW/hJlH0Mqr5FVciky3wXRPDDt5aIgQafQYYj3qL/HJ8tK9AmN3ZNHWWdfZMq4Oo46tG3KBjs90QYQpGx+8KhBMnqktr5u1tLHHAalsNzoC8dY1AW4ao2uCM7l+Yfb2FiY4zg2nlkrI2qmCQ61RbkLYEEUk0B90LpKgnFL4rbJkk5mNj9d2xNcAIf4P3+tOcbm+2miN/itbOSBQxx8YZ97ekxpIjJ2Tp44SwO6NxsLgwMRd6chpH7RMGDeA80Y9eOFca45A5eGYgcZrgNiuj2VDgekvVmoXpHJb7Pn6bi9XCYFMAWg/7MzRwSteERIHpKqKjfwL4bVUG4o2g/CPlCL4wAsufLt5IaINhe3KlxJ2BBLtiKrlN4JwHhJUnFBLo7xwkqGJcE56ZYBxqwuWQ9eT4ZbTE1SEtqnM2ngXkVJnEWqbtgVOCYDIcAos5/75OkP63NzpyyR3mTBCb/FkAA2DB38Cgcw9hCLIBadC79LmX8SkU=) 2026-02-16 02:43:59.972047 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF9CmJPTi1A8huAgibZG64+6zG+QsGzp7ISy98JPZcTyqf0nnI5OobONffe+oAu83v/dcaKuRg7QNtWpQvng2vU=) 2026-02-16 02:43:59.972058 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB2B3CpAjFxxsDMjlQpKKOcfvb473sHCPm4SXxe8Q1UZ) 2026-02-16 02:43:59.972069 | orchestrator | 2026-02-16 02:43:59.972080 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-16 02:43:59.972091 | orchestrator | Monday 16 February 2026 02:43:55 +0000 (0:00:01.004) 0:00:07.936 ******* 2026-02-16 02:43:59.972102 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMN0NTN56GaXdFVYI+k+TBIXke1RnF3gbAIvhvXgUi6d) 2026-02-16 02:43:59.972113 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCc7wDd4xYlbABeKF4Rkm/E2e7hA4w9GUFGXJCwUJPRcnQwhg8a55rO1ZNOfBxOf1TERCPr62Jh+AowFOfKuNZv7pINTtv2RosDq9D9+pll3o2V4X9vTIqnHqHd5nb5asUzWVGdI1+Fz0tssNhDkXOi/+oMAhN6doAr6tELOkJ91zOn0DfI0jZwwT+NxpS/p71bRToACf42EG5kwQnfcvubtP8rdF45LK5ViAFJPq6PKGjpmM6tHO7rRLMHtdKIwtFhw61xggCbUOtvjuhHkENm+OQbJcWivMWD14fiPwUP5jWDffzgoYgMxlXnJv9b2LBMu5jLo60lZRez53YLlyaseCV7ChOwfTsO3M1u/7kjMVE34AtOrbe9k0MC3yHe74RwtZPv0HY8lGkd6xwe1MjZqb8u4cZWQStn6KGIn5EaJceLIv8PxOTmZ5Hnz2JkjHpxuO6XUK9IdJ9NJKewqIRbM0zKm4LXbqDS9Zt2yJT1VYPZV7Nt8eYV/+H4ArpIWdM=) 2026-02-16 02:43:59.972125 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLqGiI+ocEByAVZNpIbvwZUK8yB5PKXlO0q+9eHWVSfltJRxC1DFQah86m2fI3ptT8cuMPtDPbNMTdI74ri5BNA=) 2026-02-16 02:43:59.972136 | orchestrator | 2026-02-16 02:43:59.972147 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-16 02:43:59.972158 | orchestrator | Monday 16 February 2026 02:43:56 +0000 (0:00:01.029) 
0:00:08.966 ******* 2026-02-16 02:43:59.972169 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpEWGTPr6v5IPwI2zegZgi14Eq0a1vV6h3L7uIxN5NT9NeMVqf3gJ2fUKxvY9Og06QOGckaLEyFu0d3SN6EsvvDq8Q0PDVecGVqTW+8mnjdxpvaoSKunpmtFwfnkZh3xzqVlwbzLLUg9uKKlXTrnAvU7Y20PtbewepHfwbeZTiCj/60TF1/oz9oM3m9pvqRR+Mw2z/d3SSvNsZ8Rl9GUtLcXr/EnpcDgoiUho2Xk5wtICaJuliLcffMI4rvmeSbhu2C0CH/eesKcY2o6NnrF+0+c9/gqQ0rPwy//SJf7LeCxseyNM2mEH5ItOypqzcJwxwUJEZ3cM9nkr5g0ybc05luF6jizK8mwPIBPK5nqt6mtl1ZwvkAlYMGt59cMrBPt9pq/GhUerTkWiCMk7ba9lPxtA44RM+QAw2d+teFAGH4BtSjKQf+TQZwCnp8u1IdgQFFl5GH0jaheGOXeg2z/wIDoIayMR6jRBl3Oc1Yhjmxiqd16FECSVVVzJvwNnblpM=) 2026-02-16 02:43:59.972190 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNc8+1w41g5cZUcl2IIxerCh9NdAOJfQYl21fYrvqFCKox49An5vxoYdTzaQoj9hmE5ANTs9aVSF/M1hg4R+Cuw=) 2026-02-16 02:43:59.972205 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKMi/27qokzslnsXe5GWSobsEHuKxmRvQfXr0ZRAtMYa) 2026-02-16 02:43:59.972228 | orchestrator | 2026-02-16 02:43:59.972254 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-16 02:43:59.972273 | orchestrator | Monday 16 February 2026 02:43:57 +0000 (0:00:01.022) 0:00:09.988 ******* 2026-02-16 02:43:59.972386 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE22XGwAnysTsrP5aqC4zMVRtEVQxTHkU7n5vZXltHkZ) 2026-02-16 02:43:59.972408 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDzD8WywV7Py01oaaaqSYWovhnkG8HKOSru1IfmkljEQbJgOPRXCUOPWIAOSlVTj3yJ8o37RXZ+0yxVd1npR6NnQQ2j/uK4SLys7TIdXKKwK2TUz5wAUBgnF/m1EMgqS/h1m0C5I49eiT06UFiAXBCJWrXRJpt+hQwANfy5y0oiQzXeNeQNg1MT+sa1BPP5me7Cutn68H/E7YZJfZ1YUPaHW50vO0exIuG5fZ9WXcW7y1NEzxWjBTar/LPqp6k0Lz0wcVA67GlZ8q4cffoxHSGag7KYOF2TwbnUOMFl7mNBg3Y33Id3Wfq1yi73x09jBGB9VRZA3sSOFp64Eppk2WFtOKhyF5XoPuhR6gu4PYyBAoc251w/kQSkkNWxIlCfwnVTrbCw6HXUgML8KgFhGB6w7L+u0bqcvEpHlxM94quPYhHMKyMsTHtUFnpf4Q9ldAvbYA7Xlo3AwaVpbxsJE0W8XQAOgZddhff7F5KFf+yy5iF2J+mWXQMx7NlZInf1lKk=) 2026-02-16 02:43:59.972426 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJeS6SSNv6Bh8WV+gjNvnszIbXpPpQ4Qw7+LAjjR1CINXu1jPiDzX3baoJmnn3EUs2aQaVBDFexTxWpuMzC7jnM=) 2026-02-16 02:43:59.972441 | orchestrator | 2026-02-16 02:43:59.972489 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-16 02:43:59.972508 | orchestrator | Monday 16 February 2026 02:43:58 +0000 (0:00:01.031) 0:00:11.020 ******* 2026-02-16 02:43:59.972538 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAZDvYdyrOb//s4hYK+bF0GbN7bhgR/2itgbRee8BVViIEOWOOUOZok7Iuo5NgsaIzmug9IMfWZfP2z/kG5ND+4=) 2026-02-16 02:44:10.439964 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCm4Lg3qlE+diB/9RlLVNHpmjUhdmAMQD1x9cDguw6IoHZPPosE/AJG/nhl4Bg+aWqITqR3epQwsZn6Tpfe78Z5akkbW+MIl3Zzbp+FpgAByL9hwPaxUP1OX36sYliev34J4gXQTE32PrYSiFQ3W5VCQuh55QzC3eGVHmXlW2SMgN4/ky0B7LutKA3HMPaHHmcFN5b5tlx5mLKX5kTIMkE8oDTJWLUQvp4VnplswxhlJXq5Kvobjzsj+fpqoqpMrHDDhqmbZy5eDdbknRXk6tX7RQRukuqs0WxIbQmon0dJq/yHuXPhzQCd8QPjhHjqYji5B4ZSVnZ+MhcW7KO+R7np7eAtt9beFpqvy92wbfyXiiAh0NSPH4B71BYtqFj67rDE+tdsOuXchNSochgoiQUQvXQiOE7H2PsJU2iWlzZeE8DpP+UC2VqzEc/PqMqko898aKM1vVp9sD9euKg+ws6JV3gin7taGOpiy/bGgtJtAti7yZBy2ouv1fHjCG98ZZE=) 
2026-02-16 02:44:10.440083 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPwlZJGQP3sdJRbITi7S9iSn14Eu9TRqY+LmQ9fmzJdd) 2026-02-16 02:44:10.440102 | orchestrator | 2026-02-16 02:44:10.440115 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-16 02:44:10.440128 | orchestrator | Monday 16 February 2026 02:43:59 +0000 (0:00:01.004) 0:00:12.024 ******* 2026-02-16 02:44:10.440140 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIcsogeIKWPm3k+xvVy9mJBQ8cAyd6F7nOy+E4lgr4kH) 2026-02-16 02:44:10.440152 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCNccWFkdXigJdPOCO9HtDP0CS0oI0t0VNsm/TtI3cnKeQRH94YPK3S9ui5qIflmaJ4SXmb6mzqkq+IK6YRC3/mKdTkMqme8MXX86BRf7iiAyOQx03mCwGAkgo/cvtUP9NoGK2QsrbyeuRT2SgzWpcrqFuFhme2LkfQfqvn1NSWq8Ks4yRcdP6hrHL6uTPjW/55+NQ7yjShgq5XMIqswCHradtTC9wi0MTrdw+Uxo7vx+5IgsnNhsBqZw3GNBx9UdGMkeAdx9JkhvvzJgBptB9HU1q8VYIoH+eJwDp1HogTKbY4Fdh7T/XafBvLMglXpP/oi33q7kLYvgelFf7hC0IGi5mWhVvYd4C1uHG4vtCBlqgnj1TKZiBB8kM8BDY3PaO9MlTy+xToE3L02Sf0uvsGXSz/RQ24MdQLlSzQb4PbYB9tdLmzr8rpEZpGgG1w+atqqj1O/Ae8t4hLCC6GUdtsZuMkrQtzeLYvlGg9pfYgIOgRBY5GQ2Ex6AGc+b9Lhnk=) 2026-02-16 02:44:10.440187 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNZIkyP2Bx+kfFf3Gl/K6AEiT0sWNF2NO04TNpcY/JRWGv4p0jgZKPBsoDBgoTWpFHqh/YR4XFgozAQ5511DQlg=) 2026-02-16 02:44:10.440200 | orchestrator | 2026-02-16 02:44:10.440212 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-16 02:44:10.440224 | orchestrator | Monday 16 February 2026 02:44:00 +0000 (0:00:01.017) 0:00:13.041 ******* 2026-02-16 02:44:10.440235 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-16 02:44:10.440246 | orchestrator | ok: [testbed-manager] => 
(item=testbed-node-3) 2026-02-16 02:44:10.440257 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-16 02:44:10.440268 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-16 02:44:10.440279 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-16 02:44:10.440289 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-16 02:44:10.440300 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-16 02:44:10.440310 | orchestrator | 2026-02-16 02:44:10.440321 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-16 02:44:10.440333 | orchestrator | Monday 16 February 2026 02:44:06 +0000 (0:00:05.174) 0:00:18.215 ******* 2026-02-16 02:44:10.440345 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-16 02:44:10.440358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-16 02:44:10.440369 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-16 02:44:10.440380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-16 02:44:10.440391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-16 02:44:10.440401 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-16 02:44:10.440412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-16 02:44:10.440423 | orchestrator | 2026-02-16 02:44:10.440451 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-16 02:44:10.440468 | orchestrator | Monday 16 February 2026 02:44:06 +0000 (0:00:00.189) 0:00:18.405 ******* 2026-02-16 02:44:10.440521 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBRBBj+Oa/VK8N/YxdhyjYvnrllgYSDx3jwHW4cICogl) 2026-02-16 02:44:10.440571 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDaigbDcsPyD60cnH3XkzVyghg+NEzwF2UtymlMdPX4m4rPumZkmYcOgQHuC3iIyn5Q091IDOzvmeta2/aDOTL5DirHa++TabI2Npw6yBeSG0AmUIzEZ1OJH0rbOXLue6nnkowtFPAe6XfA004AF6E4TPvC6nGUPcHGkwKJtZznXtqz3DLM73yduZxKChUgsLvjkA9cPaVu2mmpkX7QZJaveNejID/hpQj/v4+XFLOSvjUlhJp42ANURUw1JKJ7b/H5DB5mmcpxY8opxilazhoXOgXTOSCECghChXCMfFlnzh3tZlJ5wos5hw5hkj8L4CZzwqN4NjcQSPhnhU+eTSDJosbZZl6x+UXZf7RxZJqPD/5XvQi+R7noGL6E08r8xjC7I3hcPDeF8k+TndWod7PSBL7Ee3Uhm6Ek4n2NGwgedrLTm3PtkicTL0j99dIed2a3bWBBJR1Sh/l0cd76dM4eM46kNZWDV0dxOFyjCO4yV6G09CCsM262gx/c2WA90Bs=) 2026-02-16 02:44:10.440593 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLcsg8deK86a6lkIYpLrlVoyp78ORvMrB7kRC60y7vAcNM9sR3J19rblpl7GmCmLA/gcHNq/cp8SJR2bBofJmXE=) 2026-02-16 02:44:10.440628 | orchestrator | 2026-02-16 02:44:10.440641 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-16 02:44:10.440654 | orchestrator | Monday 16 February 2026 
02:44:07 +0000 (0:00:00.985) 0:00:19.391 ******* 2026-02-16 02:44:10.440666 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB2B3CpAjFxxsDMjlQpKKOcfvb473sHCPm4SXxe8Q1UZ) 2026-02-16 02:44:10.440679 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXqCmNL8BPRjHwhdznW/hJlH0Mqr5FVciky3wXRPDDt5aIgQafQYYj3qL/HJ8tK9AmN3ZNHWWdfZMq4Oo46tG3KBjs90QYQpGx+8KhBMnqktr5u1tLHHAalsNzoC8dY1AW4ao2uCM7l+Yfb2FiY4zg2nlkrI2qmCQ61RbkLYEEUk0B90LpKgnFL4rbJkk5mNj9d2xNcAIf4P3+tOcbm+2miN/itbOSBQxx8YZ97ekxpIjJ2Tp44SwO6NxsLgwMRd6chpH7RMGDeA80Y9eOFca45A5eGYgcZrgNiuj2VDgekvVmoXpHJb7Pn6bi9XCYFMAWg/7MzRwSteERIHpKqKjfwL4bVUG4o2g/CPlCL4wAsufLt5IaINhe3KlxJ2BBLtiKrlN4JwHhJUnFBLo7xwkqGJcE56ZYBxqwuWQ9eT4ZbTE1SEtqnM2ngXkVJnEWqbtgVOCYDIcAos5/75OkP63NzpyyR3mTBCb/FkAA2DB38Cgcw9hCLIBadC79LmX8SkU=) 2026-02-16 02:44:10.440692 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF9CmJPTi1A8huAgibZG64+6zG+QsGzp7ISy98JPZcTyqf0nnI5OobONffe+oAu83v/dcaKuRg7QNtWpQvng2vU=) 2026-02-16 02:44:10.440704 | orchestrator | 2026-02-16 02:44:10.440717 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-16 02:44:10.440729 | orchestrator | Monday 16 February 2026 02:44:08 +0000 (0:00:01.036) 0:00:20.427 ******* 2026-02-16 02:44:10.440741 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCc7wDd4xYlbABeKF4Rkm/E2e7hA4w9GUFGXJCwUJPRcnQwhg8a55rO1ZNOfBxOf1TERCPr62Jh+AowFOfKuNZv7pINTtv2RosDq9D9+pll3o2V4X9vTIqnHqHd5nb5asUzWVGdI1+Fz0tssNhDkXOi/+oMAhN6doAr6tELOkJ91zOn0DfI0jZwwT+NxpS/p71bRToACf42EG5kwQnfcvubtP8rdF45LK5ViAFJPq6PKGjpmM6tHO7rRLMHtdKIwtFhw61xggCbUOtvjuhHkENm+OQbJcWivMWD14fiPwUP5jWDffzgoYgMxlXnJv9b2LBMu5jLo60lZRez53YLlyaseCV7ChOwfTsO3M1u/7kjMVE34AtOrbe9k0MC3yHe74RwtZPv0HY8lGkd6xwe1MjZqb8u4cZWQStn6KGIn5EaJceLIv8PxOTmZ5Hnz2JkjHpxuO6XUK9IdJ9NJKewqIRbM0zKm4LXbqDS9Zt2yJT1VYPZV7Nt8eYV/+H4ArpIWdM=) 2026-02-16 02:44:10.440754 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLqGiI+ocEByAVZNpIbvwZUK8yB5PKXlO0q+9eHWVSfltJRxC1DFQah86m2fI3ptT8cuMPtDPbNMTdI74ri5BNA=) 2026-02-16 02:44:10.440766 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMN0NTN56GaXdFVYI+k+TBIXke1RnF3gbAIvhvXgUi6d) 2026-02-16 02:44:10.440778 | orchestrator | 2026-02-16 02:44:10.440790 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-16 02:44:10.440802 | orchestrator | Monday 16 February 2026 02:44:09 +0000 (0:00:01.043) 0:00:21.471 ******* 2026-02-16 02:44:10.440827 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpEWGTPr6v5IPwI2zegZgi14Eq0a1vV6h3L7uIxN5NT9NeMVqf3gJ2fUKxvY9Og06QOGckaLEyFu0d3SN6EsvvDq8Q0PDVecGVqTW+8mnjdxpvaoSKunpmtFwfnkZh3xzqVlwbzLLUg9uKKlXTrnAvU7Y20PtbewepHfwbeZTiCj/60TF1/oz9oM3m9pvqRR+Mw2z/d3SSvNsZ8Rl9GUtLcXr/EnpcDgoiUho2Xk5wtICaJuliLcffMI4rvmeSbhu2C0CH/eesKcY2o6NnrF+0+c9/gqQ0rPwy//SJf7LeCxseyNM2mEH5ItOypqzcJwxwUJEZ3cM9nkr5g0ybc05luF6jizK8mwPIBPK5nqt6mtl1ZwvkAlYMGt59cMrBPt9pq/GhUerTkWiCMk7ba9lPxtA44RM+QAw2d+teFAGH4BtSjKQf+TQZwCnp8u1IdgQFFl5GH0jaheGOXeg2z/wIDoIayMR6jRBl3Oc1Yhjmxiqd16FECSVVVzJvwNnblpM=) 2026-02-16 02:44:14.613810 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNc8+1w41g5cZUcl2IIxerCh9NdAOJfQYl21fYrvqFCKox49An5vxoYdTzaQoj9hmE5ANTs9aVSF/M1hg4R+Cuw=) 2026-02-16 02:44:14.613933 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKMi/27qokzslnsXe5GWSobsEHuKxmRvQfXr0ZRAtMYa) 2026-02-16 02:44:14.613987 | orchestrator | 2026-02-16 02:44:14.614007 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-16 02:44:14.614167 | orchestrator | Monday 16 February 2026 02:44:10 +0000 (0:00:01.020) 0:00:22.492 ******* 2026-02-16 02:44:14.614192 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDzD8WywV7Py01oaaaqSYWovhnkG8HKOSru1IfmkljEQbJgOPRXCUOPWIAOSlVTj3yJ8o37RXZ+0yxVd1npR6NnQQ2j/uK4SLys7TIdXKKwK2TUz5wAUBgnF/m1EMgqS/h1m0C5I49eiT06UFiAXBCJWrXRJpt+hQwANfy5y0oiQzXeNeQNg1MT+sa1BPP5me7Cutn68H/E7YZJfZ1YUPaHW50vO0exIuG5fZ9WXcW7y1NEzxWjBTar/LPqp6k0Lz0wcVA67GlZ8q4cffoxHSGag7KYOF2TwbnUOMFl7mNBg3Y33Id3Wfq1yi73x09jBGB9VRZA3sSOFp64Eppk2WFtOKhyF5XoPuhR6gu4PYyBAoc251w/kQSkkNWxIlCfwnVTrbCw6HXUgML8KgFhGB6w7L+u0bqcvEpHlxM94quPYhHMKyMsTHtUFnpf4Q9ldAvbYA7Xlo3AwaVpbxsJE0W8XQAOgZddhff7F5KFf+yy5iF2J+mWXQMx7NlZInf1lKk=) 2026-02-16 02:44:14.614212 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJeS6SSNv6Bh8WV+gjNvnszIbXpPpQ4Qw7+LAjjR1CINXu1jPiDzX3baoJmnn3EUs2aQaVBDFexTxWpuMzC7jnM=) 2026-02-16 02:44:14.614229 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE22XGwAnysTsrP5aqC4zMVRtEVQxTHkU7n5vZXltHkZ) 2026-02-16 02:44:14.614245 | orchestrator | 2026-02-16 02:44:14.614261 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-16 02:44:14.614276 | orchestrator | Monday 16 February 2026 02:44:11 +0000 (0:00:01.009) 
0:00:23.501 ******* 2026-02-16 02:44:14.614292 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCm4Lg3qlE+diB/9RlLVNHpmjUhdmAMQD1x9cDguw6IoHZPPosE/AJG/nhl4Bg+aWqITqR3epQwsZn6Tpfe78Z5akkbW+MIl3Zzbp+FpgAByL9hwPaxUP1OX36sYliev34J4gXQTE32PrYSiFQ3W5VCQuh55QzC3eGVHmXlW2SMgN4/ky0B7LutKA3HMPaHHmcFN5b5tlx5mLKX5kTIMkE8oDTJWLUQvp4VnplswxhlJXq5Kvobjzsj+fpqoqpMrHDDhqmbZy5eDdbknRXk6tX7RQRukuqs0WxIbQmon0dJq/yHuXPhzQCd8QPjhHjqYji5B4ZSVnZ+MhcW7KO+R7np7eAtt9beFpqvy92wbfyXiiAh0NSPH4B71BYtqFj67rDE+tdsOuXchNSochgoiQUQvXQiOE7H2PsJU2iWlzZeE8DpP+UC2VqzEc/PqMqko898aKM1vVp9sD9euKg+ws6JV3gin7taGOpiy/bGgtJtAti7yZBy2ouv1fHjCG98ZZE=) 2026-02-16 02:44:14.614308 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAZDvYdyrOb//s4hYK+bF0GbN7bhgR/2itgbRee8BVViIEOWOOUOZok7Iuo5NgsaIzmug9IMfWZfP2z/kG5ND+4=) 2026-02-16 02:44:14.614324 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPwlZJGQP3sdJRbITi7S9iSn14Eu9TRqY+LmQ9fmzJdd) 2026-02-16 02:44:14.614341 | orchestrator | 2026-02-16 02:44:14.614358 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-16 02:44:14.614373 | orchestrator | Monday 16 February 2026 02:44:12 +0000 (0:00:01.007) 0:00:24.509 ******* 2026-02-16 02:44:14.614412 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCNccWFkdXigJdPOCO9HtDP0CS0oI0t0VNsm/TtI3cnKeQRH94YPK3S9ui5qIflmaJ4SXmb6mzqkq+IK6YRC3/mKdTkMqme8MXX86BRf7iiAyOQx03mCwGAkgo/cvtUP9NoGK2QsrbyeuRT2SgzWpcrqFuFhme2LkfQfqvn1NSWq8Ks4yRcdP6hrHL6uTPjW/55+NQ7yjShgq5XMIqswCHradtTC9wi0MTrdw+Uxo7vx+5IgsnNhsBqZw3GNBx9UdGMkeAdx9JkhvvzJgBptB9HU1q8VYIoH+eJwDp1HogTKbY4Fdh7T/XafBvLMglXpP/oi33q7kLYvgelFf7hC0IGi5mWhVvYd4C1uHG4vtCBlqgnj1TKZiBB8kM8BDY3PaO9MlTy+xToE3L02Sf0uvsGXSz/RQ24MdQLlSzQb4PbYB9tdLmzr8rpEZpGgG1w+atqqj1O/Ae8t4hLCC6GUdtsZuMkrQtzeLYvlGg9pfYgIOgRBY5GQ2Ex6AGc+b9Lhnk=) 2026-02-16 02:44:14.614432 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNZIkyP2Bx+kfFf3Gl/K6AEiT0sWNF2NO04TNpcY/JRWGv4p0jgZKPBsoDBgoTWpFHqh/YR4XFgozAQ5511DQlg=) 2026-02-16 02:44:14.614449 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIcsogeIKWPm3k+xvVy9mJBQ8cAyd6F7nOy+E4lgr4kH) 2026-02-16 02:44:14.614465 | orchestrator | 2026-02-16 02:44:14.614482 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-16 02:44:14.614557 | orchestrator | Monday 16 February 2026 02:44:13 +0000 (0:00:01.019) 0:00:25.529 ******* 2026-02-16 02:44:14.614579 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-16 02:44:14.614596 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-16 02:44:14.614638 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-16 02:44:14.614760 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-16 02:44:14.614787 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-16 02:44:14.614804 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-16 02:44:14.614819 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-16 02:44:14.614835 | orchestrator | 
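The `osism.commons.known_hosts` tasks above scan each inventory host twice (once by hostname, once by `ansible_host` IP) and append the resulting `ssh-rsa`, `ecdsa-sha2-nistp256`, and `ssh-ed25519` entries to a known_hosts file. A rough plain-shell approximation of that behaviour (the hostnames and file location below are illustrative — the role itself iterates over the Ansible inventory):

```shell
#!/usr/bin/env sh
# Approximate what the known_hosts role does: scan each host's SSH keys
# and collect them into one known_hosts file. Host names are examples;
# the real role walks the inventory and also scans the ansible_host IPs.
set -e
KNOWN_HOSTS="$(mktemp -d)/known_hosts"
touch "$KNOWN_HOSTS"
for host in testbed-node-0 testbed-node-1; do
    if command -v ssh-keyscan >/dev/null 2>&1; then
        # -T 5: give up after 5 seconds per host; by default ssh-keyscan
        # reports the rsa, ecdsa and ed25519 keys seen in the log above
        ssh-keyscan -T 5 "$host" >> "$KNOWN_HOSTS" 2>/dev/null || true
    fi
done
sort -u "$KNOWN_HOSTS" -o "$KNOWN_HOSTS"  # drop duplicate entries
chmod 0644 "$KNOWN_HOSTS"                 # cf. the "Set file permissions" task
```

Collecting per-host scans into a single file is why the play shows one "Write scanned known_hosts entries" task per node: each include appends that node's scanned keys.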
skipping: [testbed-manager] 2026-02-16 02:44:14.614852 | orchestrator | 2026-02-16 02:44:14.614869 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-16 02:44:14.614885 | orchestrator | Monday 16 February 2026 02:44:13 +0000 (0:00:00.170) 0:00:25.700 ******* 2026-02-16 02:44:14.614901 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:44:14.614912 | orchestrator | 2026-02-16 02:44:14.614921 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-16 02:44:14.614940 | orchestrator | Monday 16 February 2026 02:44:13 +0000 (0:00:00.050) 0:00:25.750 ******* 2026-02-16 02:44:14.614950 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:44:14.614960 | orchestrator | 2026-02-16 02:44:14.614969 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-16 02:44:14.614979 | orchestrator | Monday 16 February 2026 02:44:13 +0000 (0:00:00.047) 0:00:25.798 ******* 2026-02-16 02:44:14.614989 | orchestrator | changed: [testbed-manager] 2026-02-16 02:44:14.614998 | orchestrator | 2026-02-16 02:44:14.615008 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 02:44:14.615018 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-16 02:44:14.615028 | orchestrator | 2026-02-16 02:44:14.615038 | orchestrator | 2026-02-16 02:44:14.615048 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 02:44:14.615057 | orchestrator | Monday 16 February 2026 02:44:14 +0000 (0:00:00.684) 0:00:26.483 ******* 2026-02-16 02:44:14.615067 | orchestrator | =============================================================================== 2026-02-16 02:44:14.615076 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.59s 2026-02-16 
02:44:14.615086 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.17s 2026-02-16 02:44:14.615096 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-02-16 02:44:14.615106 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-02-16 02:44:14.615116 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-02-16 02:44:14.615125 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-02-16 02:44:14.615135 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-02-16 02:44:14.615144 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-02-16 02:44:14.615154 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-02-16 02:44:14.615164 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-02-16 02:44:14.615173 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-02-16 02:44:14.615183 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-02-16 02:44:14.615193 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-02-16 02:44:14.615202 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-02-16 02:44:14.615223 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-02-16 02:44:14.615233 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-02-16 02:44:14.615242 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.68s 2026-02-16 
02:44:14.615254 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2026-02-16 02:44:14.615271 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-02-16 02:44:14.615288 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s 2026-02-16 02:44:14.879927 | orchestrator | + osism apply squid 2026-02-16 02:44:26.825844 | orchestrator | 2026-02-16 02:44:26 | INFO  | Task a1127863-b564-401e-87db-8e7349c339cc (squid) was prepared for execution. 2026-02-16 02:44:26.825940 | orchestrator | 2026-02-16 02:44:26 | INFO  | It takes a moment until task a1127863-b564-401e-87db-8e7349c339cc (squid) has been started and output is visible here. 2026-02-16 02:46:19.440900 | orchestrator | 2026-02-16 02:46:19.441007 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-16 02:46:19.441023 | orchestrator | 2026-02-16 02:46:19.441036 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-16 02:46:19.441048 | orchestrator | Monday 16 February 2026 02:44:30 +0000 (0:00:00.158) 0:00:00.158 ******* 2026-02-16 02:46:19.441059 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-16 02:46:19.441073 | orchestrator | 2026-02-16 02:46:19.441084 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-16 02:46:19.441096 | orchestrator | Monday 16 February 2026 02:44:30 +0000 (0:00:00.082) 0:00:00.241 ******* 2026-02-16 02:46:19.441107 | orchestrator | ok: [testbed-manager] 2026-02-16 02:46:19.441118 | orchestrator | 2026-02-16 02:46:19.441126 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-16 
02:46:19.441133 | orchestrator | Monday 16 February 2026 02:44:32 +0000 (0:00:01.361) 0:00:01.602 ******* 2026-02-16 02:46:19.441140 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-16 02:46:19.441147 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-16 02:46:19.441154 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-16 02:46:19.441161 | orchestrator | 2026-02-16 02:46:19.441168 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-16 02:46:19.441175 | orchestrator | Monday 16 February 2026 02:44:33 +0000 (0:00:01.109) 0:00:02.712 ******* 2026-02-16 02:46:19.441182 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-16 02:46:19.441189 | orchestrator | 2026-02-16 02:46:19.441196 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-16 02:46:19.441202 | orchestrator | Monday 16 February 2026 02:44:34 +0000 (0:00:01.025) 0:00:03.737 ******* 2026-02-16 02:46:19.441209 | orchestrator | ok: [testbed-manager] 2026-02-16 02:46:19.441216 | orchestrator | 2026-02-16 02:46:19.441223 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-16 02:46:19.441230 | orchestrator | Monday 16 February 2026 02:44:34 +0000 (0:00:00.365) 0:00:04.103 ******* 2026-02-16 02:46:19.441238 | orchestrator | changed: [testbed-manager] 2026-02-16 02:46:19.441244 | orchestrator | 2026-02-16 02:46:19.441251 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-16 02:46:19.441258 | orchestrator | Monday 16 February 2026 02:44:35 +0000 (0:00:00.898) 0:00:05.002 ******* 2026-02-16 02:46:19.441265 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-02-16 02:46:19.441276 | orchestrator | ok: [testbed-manager] 2026-02-16 02:46:19.441282 | orchestrator | 2026-02-16 02:46:19.441289 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-02-16 02:46:19.441312 | orchestrator | Monday 16 February 2026 02:45:06 +0000 (0:00:30.680) 0:00:35.682 ******* 2026-02-16 02:46:19.441320 | orchestrator | changed: [testbed-manager] 2026-02-16 02:46:19.441326 | orchestrator | 2026-02-16 02:46:19.441333 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-02-16 02:46:19.441340 | orchestrator | Monday 16 February 2026 02:45:18 +0000 (0:00:11.981) 0:00:47.664 ******* 2026-02-16 02:46:19.441347 | orchestrator | Pausing for 60 seconds 2026-02-16 02:46:19.441356 | orchestrator | changed: [testbed-manager] 2026-02-16 02:46:19.441367 | orchestrator | 2026-02-16 02:46:19.441378 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-02-16 02:46:19.441390 | orchestrator | Monday 16 February 2026 02:46:18 +0000 (0:01:00.089) 0:01:47.754 ******* 2026-02-16 02:46:19.441401 | orchestrator | ok: [testbed-manager] 2026-02-16 02:46:19.441411 | orchestrator | 2026-02-16 02:46:19.441420 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-02-16 02:46:19.441429 | orchestrator | Monday 16 February 2026 02:46:18 +0000 (0:00:00.063) 0:01:47.817 ******* 2026-02-16 02:46:19.441441 | orchestrator | changed: [testbed-manager] 2026-02-16 02:46:19.441450 | orchestrator | 2026-02-16 02:46:19.441461 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 02:46:19.441471 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 02:46:19.441481 | orchestrator | 2026-02-16 02:46:19.441490 | orchestrator | 2026-02-16 02:46:19.441500 | orchestrator | 
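The squid play's "Manage squid service" task retried until the container came up, and the "Wait for an healthy squid service" handler then blocked on the Docker healthcheck. A minimal sketch of that wait loop — the container name `squid`, the try budget, and the delay are assumptions, since the role encapsulates the equivalent logic in Ansible:

```shell
#!/usr/bin/env sh
# Poll a container's Docker healthcheck until it reports "healthy".
# Assumed: the compose service runs as a container named "squid" with a
# HEALTHCHECK defined; tries/delay mirror a typical retry budget.
wait_healthy() {
    container="$1"
    max_tries="${2:-60}"
    delay="${3:-5}"
    tries=0
    while [ "$tries" -lt "$max_tries" ]; do
        # .State.Health.Status is "starting", "healthy" or "unhealthy"
        state=$(docker inspect --format '{{.State.Health.Status}}' "$container" 2>/dev/null)
        [ "$state" = "healthy" ] && return 0
        tries=$((tries + 1))
        sleep "$delay"
    done
    echo "timed out waiting for $container to become healthy" >&2
    return 1
}
```

Usage would be `wait_healthy squid` after `docker compose up -d`; the 60-second "Pausing for 60 seconds" line in the log is the role's fixed grace period before this health poll.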
TASKS RECAP ******************************************************************** 2026-02-16 02:46:19.441509 | orchestrator | Monday 16 February 2026 02:46:19 +0000 (0:00:00.633) 0:01:48.451 ******* 2026-02-16 02:46:19.441518 | orchestrator | =============================================================================== 2026-02-16 02:46:19.441540 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-02-16 02:46:19.441550 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.68s 2026-02-16 02:46:19.441561 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.98s 2026-02-16 02:46:19.441571 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.36s 2026-02-16 02:46:19.441582 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.11s 2026-02-16 02:46:19.441593 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.03s 2026-02-16 02:46:19.441603 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.90s 2026-02-16 02:46:19.441613 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s 2026-02-16 02:46:19.441623 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2026-02-16 02:46:19.441634 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-02-16 02:46:19.441646 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-02-16 02:46:19.708556 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-16 02:46:19.708728 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-16 02:46:19.758626 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-16 02:46:19.758714 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-02-16 02:46:19.765220 | orchestrator | + set -e 2026-02-16 02:46:19.765257 | orchestrator | + NAMESPACE=kolla/release 2026-02-16 02:46:19.765270 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-16 02:46:19.769541 | orchestrator | ++ semver 9.5.0 9.0.0 2026-02-16 02:46:19.837490 | orchestrator | + [[ 1 -lt 0 ]] 2026-02-16 02:46:19.838098 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-02-16 02:46:31.835358 | orchestrator | 2026-02-16 02:46:31 | INFO  | Task e20b1bbd-75ed-4298-9e87-87e8850c51c2 (operator) was prepared for execution. 2026-02-16 02:46:31.835452 | orchestrator | 2026-02-16 02:46:31 | INFO  | It takes a moment until task e20b1bbd-75ed-4298-9e87-87e8850c51c2 (operator) has been started and output is visible here. 2026-02-16 02:46:47.162983 | orchestrator | 2026-02-16 02:46:47.163085 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-02-16 02:46:47.163096 | orchestrator | 2026-02-16 02:46:47.163104 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-16 02:46:47.163111 | orchestrator | Monday 16 February 2026 02:46:35 +0000 (0:00:00.101) 0:00:00.101 ******* 2026-02-16 02:46:47.163117 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:46:47.163124 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:46:47.163131 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:46:47.163137 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:46:47.163143 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:46:47.163150 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:46:47.163156 | orchestrator | 2026-02-16 02:46:47.163164 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-02-16 02:46:47.163171 | orchestrator | Monday 16 February 2026 02:46:38 +0000 (0:00:03.245) 0:00:03.347 
******* 2026-02-16 02:46:47.163179 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:46:47.163186 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:46:47.163193 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:46:47.163214 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:46:47.163221 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:46:47.163227 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:46:47.163233 | orchestrator | 2026-02-16 02:46:47.163240 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-02-16 02:46:47.163246 | orchestrator | 2026-02-16 02:46:47.163252 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-16 02:46:47.163258 | orchestrator | Monday 16 February 2026 02:46:39 +0000 (0:00:00.728) 0:00:04.075 ******* 2026-02-16 02:46:47.163264 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:46:47.163271 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:46:47.163277 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:46:47.163283 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:46:47.163289 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:46:47.163296 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:46:47.163304 | orchestrator | 2026-02-16 02:46:47.163310 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-16 02:46:47.163316 | orchestrator | Monday 16 February 2026 02:46:39 +0000 (0:00:00.162) 0:00:04.238 ******* 2026-02-16 02:46:47.163322 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:46:47.163328 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:46:47.163333 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:46:47.163339 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:46:47.163345 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:46:47.163351 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:46:47.163358 | orchestrator | 2026-02-16 02:46:47.163365 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-16 02:46:47.163372 | orchestrator | Monday 16 February 2026 02:46:39 +0000 (0:00:00.165) 0:00:04.404 ******* 2026-02-16 02:46:47.163378 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:46:47.163386 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:46:47.163392 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:46:47.163399 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:46:47.163406 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:46:47.163412 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:46:47.163419 | orchestrator | 2026-02-16 02:46:47.163425 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-16 02:46:47.163432 | orchestrator | Monday 16 February 2026 02:46:40 +0000 (0:00:00.681) 0:00:05.086 ******* 2026-02-16 02:46:47.163439 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:46:47.163446 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:46:47.163453 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:46:47.163459 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:46:47.163466 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:46:47.163473 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:46:47.163501 | orchestrator | 2026-02-16 02:46:47.163508 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-16 02:46:47.163515 | orchestrator | Monday 16 February 2026 02:46:41 +0000 (0:00:00.795) 0:00:05.881 ******* 2026-02-16 02:46:47.163522 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-02-16 02:46:47.163529 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-02-16 02:46:47.163536 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-02-16 02:46:47.163543 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-02-16 02:46:47.163555 | 
orchestrator | changed: [testbed-node-4] => (item=adm) 2026-02-16 02:46:47.163567 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-02-16 02:46:47.163581 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-02-16 02:46:47.163592 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-02-16 02:46:47.163600 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-02-16 02:46:47.163608 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-02-16 02:46:47.163615 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-02-16 02:46:47.163623 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-02-16 02:46:47.163630 | orchestrator | 2026-02-16 02:46:47.163638 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-16 02:46:47.163646 | orchestrator | Monday 16 February 2026 02:46:42 +0000 (0:00:01.229) 0:00:07.111 ******* 2026-02-16 02:46:47.163653 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:46:47.163661 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:46:47.163669 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:46:47.163675 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:46:47.163683 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:46:47.163690 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:46:47.163698 | orchestrator | 2026-02-16 02:46:47.163706 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-16 02:46:47.163714 | orchestrator | Monday 16 February 2026 02:46:43 +0000 (0:00:01.223) 0:00:08.335 ******* 2026-02-16 02:46:47.163722 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-02-16 02:46:47.163729 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-02-16 02:46:47.163736 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-02-16 02:46:47.163744 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-02-16 02:46:47.163767 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-02-16 02:46:47.163774 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-02-16 02:46:47.163780 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-02-16 02:46:47.163786 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-02-16 02:46:47.163793 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-02-16 02:46:47.163803 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-02-16 02:46:47.163815 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-02-16 02:46:47.163828 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-02-16 02:46:47.163841 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-02-16 02:46:47.163849 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-02-16 02:46:47.163856 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-02-16 02:46:47.163864 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-02-16 02:46:47.163872 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-02-16 02:46:47.163880 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-02-16 02:46:47.163888 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-02-16 02:46:47.163896 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-02-16 02:46:47.163911 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-02-16 02:46:47.163918 | 
orchestrator | 2026-02-16 02:46:47.163947 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-16 02:46:47.163956 | orchestrator | Monday 16 February 2026 02:46:45 +0000 (0:00:01.290) 0:00:09.625 ******* 2026-02-16 02:46:47.163963 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:46:47.163970 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:46:47.163976 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:46:47.163983 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:46:47.163990 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:46:47.163997 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:46:47.164003 | orchestrator | 2026-02-16 02:46:47.164010 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-16 02:46:47.164017 | orchestrator | Monday 16 February 2026 02:46:45 +0000 (0:00:00.164) 0:00:09.790 ******* 2026-02-16 02:46:47.164024 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:46:47.164030 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:46:47.164037 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:46:47.164043 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:46:47.164049 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:46:47.164056 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:46:47.164063 | orchestrator | 2026-02-16 02:46:47.164070 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-16 02:46:47.164077 | orchestrator | Monday 16 February 2026 02:46:45 +0000 (0:00:00.185) 0:00:09.975 ******* 2026-02-16 02:46:47.164083 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:46:47.164090 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:46:47.164097 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:46:47.164104 | orchestrator | changed: [testbed-node-5] 2026-02-16 
02:46:47.164110 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:46:47.164117 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:46:47.164124 | orchestrator | 2026-02-16 02:46:47.164131 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-16 02:46:47.164138 | orchestrator | Monday 16 February 2026 02:46:45 +0000 (0:00:00.602) 0:00:10.577 ******* 2026-02-16 02:46:47.164145 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:46:47.164152 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:46:47.164159 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:46:47.164165 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:46:47.164182 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:46:47.164189 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:46:47.164196 | orchestrator | 2026-02-16 02:46:47.164203 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-16 02:46:47.164209 | orchestrator | Monday 16 February 2026 02:46:46 +0000 (0:00:00.164) 0:00:10.742 ******* 2026-02-16 02:46:47.164216 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-16 02:46:47.164222 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-16 02:46:47.164228 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-16 02:46:47.164235 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:46:47.164242 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:46:47.164249 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-16 02:46:47.164256 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:46:47.164263 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:46:47.164269 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-16 02:46:47.164276 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:46:47.164283 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-16 
02:46:47.164290 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:46:47.164297 | orchestrator | 2026-02-16 02:46:47.164304 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-16 02:46:47.164311 | orchestrator | Monday 16 February 2026 02:46:46 +0000 (0:00:00.710) 0:00:11.452 ******* 2026-02-16 02:46:47.164324 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:46:47.164331 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:46:47.164338 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:46:47.164345 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:46:47.164352 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:46:47.164359 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:46:47.164366 | orchestrator | 2026-02-16 02:46:47.164373 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-16 02:46:47.164379 | orchestrator | Monday 16 February 2026 02:46:47 +0000 (0:00:00.145) 0:00:11.598 ******* 2026-02-16 02:46:47.164386 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:46:47.164392 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:46:47.164399 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:46:47.164405 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:46:47.164419 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:46:48.425296 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:46:48.425395 | orchestrator | 2026-02-16 02:46:48.425412 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-16 02:46:48.425426 | orchestrator | Monday 16 February 2026 02:46:47 +0000 (0:00:00.130) 0:00:11.729 ******* 2026-02-16 02:46:48.425438 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:46:48.425449 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:46:48.425460 | orchestrator | skipping: [testbed-node-2] 2026-02-16 
02:46:48.425472 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:46:48.425483 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:46:48.425494 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:46:48.425504 | orchestrator | 2026-02-16 02:46:48.425516 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-16 02:46:48.425527 | orchestrator | Monday 16 February 2026 02:46:47 +0000 (0:00:00.144) 0:00:11.873 ******* 2026-02-16 02:46:48.425538 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:46:48.425549 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:46:48.425576 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:46:48.425588 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:46:48.425599 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:46:48.425610 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:46:48.425620 | orchestrator | 2026-02-16 02:46:48.425631 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-16 02:46:48.425642 | orchestrator | Monday 16 February 2026 02:46:47 +0000 (0:00:00.669) 0:00:12.543 ******* 2026-02-16 02:46:48.425653 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:46:48.425664 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:46:48.425675 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:46:48.425686 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:46:48.425697 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:46:48.425708 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:46:48.425718 | orchestrator | 2026-02-16 02:46:48.425730 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 02:46:48.425742 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-16 02:46:48.425754 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-16 02:46:48.425765 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-16 02:46:48.425776 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-16 02:46:48.425787 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-16 02:46:48.425820 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-16 02:46:48.425832 | orchestrator | 2026-02-16 02:46:48.425845 | orchestrator | 2026-02-16 02:46:48.425858 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 02:46:48.425871 | orchestrator | Monday 16 February 2026 02:46:48 +0000 (0:00:00.219) 0:00:12.762 ******* 2026-02-16 02:46:48.425883 | orchestrator | =============================================================================== 2026-02-16 02:46:48.425895 | orchestrator | Gathering Facts --------------------------------------------------------- 3.25s 2026-02-16 02:46:48.425908 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.29s 2026-02-16 02:46:48.425921 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.23s 2026-02-16 02:46:48.425962 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.22s 2026-02-16 02:46:48.425974 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s 2026-02-16 02:46:48.425986 | orchestrator | Do not require tty for all users ---------------------------------------- 0.73s 2026-02-16 02:46:48.425998 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s 2026-02-16 02:46:48.426010 | orchestrator | osism.commons.operator : Create 
operator group -------------------------- 0.68s 2026-02-16 02:46:48.426074 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s 2026-02-16 02:46:48.426086 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.60s 2026-02-16 02:46:48.426096 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2026-02-16 02:46:48.426107 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s 2026-02-16 02:46:48.426118 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s 2026-02-16 02:46:48.426129 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s 2026-02-16 02:46:48.426139 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s 2026-02-16 02:46:48.426150 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2026-02-16 02:46:48.426161 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2026-02-16 02:46:48.426172 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s 2026-02-16 02:46:48.426182 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s 2026-02-16 02:46:48.741411 | orchestrator | + osism apply --environment custom facts 2026-02-16 02:46:50.591382 | orchestrator | 2026-02-16 02:46:50 | INFO  | Trying to run play facts in environment custom 2026-02-16 02:47:00.735857 | orchestrator | 2026-02-16 02:47:00 | INFO  | Task 18415ca7-0f68-4816-b1bc-ced6e76a9687 (facts) was prepared for execution. 2026-02-16 02:47:00.736023 | orchestrator | 2026-02-16 02:47:00 | INFO  | It takes a moment until task 18415ca7-0f68-4816-b1bc-ced6e76a9687 (facts) has been started and output is visible here. 
2026-02-16 02:47:45.756706 | orchestrator | 2026-02-16 02:47:45.756805 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-02-16 02:47:45.756821 | orchestrator | 2026-02-16 02:47:45.756833 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-16 02:47:45.756844 | orchestrator | Monday 16 February 2026 02:47:04 +0000 (0:00:00.083) 0:00:00.083 ******* 2026-02-16 02:47:45.756855 | orchestrator | ok: [testbed-manager] 2026-02-16 02:47:45.756865 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:47:45.756875 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:47:45.756884 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:47:45.756893 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:47:45.756902 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:47:45.756927 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:47:45.756936 | orchestrator | 2026-02-16 02:47:45.756946 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-02-16 02:47:45.756955 | orchestrator | Monday 16 February 2026 02:47:06 +0000 (0:00:01.419) 0:00:01.503 ******* 2026-02-16 02:47:45.756964 | orchestrator | ok: [testbed-manager] 2026-02-16 02:47:45.756973 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:47:45.756982 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:47:45.756991 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:47:45.756999 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:47:45.757008 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:47:45.757017 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:47:45.757026 | orchestrator | 2026-02-16 02:47:45.757035 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-02-16 02:47:45.757044 | orchestrator | 2026-02-16 02:47:45.757053 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-02-16 02:47:45.757062 | orchestrator | Monday 16 February 2026 02:47:07 +0000 (0:00:01.174) 0:00:02.677 ******* 2026-02-16 02:47:45.757095 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:47:45.757105 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:47:45.757114 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:47:45.757122 | orchestrator | 2026-02-16 02:47:45.757131 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-16 02:47:45.757141 | orchestrator | Monday 16 February 2026 02:47:07 +0000 (0:00:00.107) 0:00:02.785 ******* 2026-02-16 02:47:45.757149 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:47:45.757158 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:47:45.757166 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:47:45.757174 | orchestrator | 2026-02-16 02:47:45.757183 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-16 02:47:45.757192 | orchestrator | Monday 16 February 2026 02:47:07 +0000 (0:00:00.227) 0:00:03.012 ******* 2026-02-16 02:47:45.757200 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:47:45.757209 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:47:45.757218 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:47:45.757232 | orchestrator | 2026-02-16 02:47:45.757247 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-16 02:47:45.757263 | orchestrator | Monday 16 February 2026 02:47:07 +0000 (0:00:00.241) 0:00:03.254 ******* 2026-02-16 02:47:45.757276 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 02:47:45.757288 | orchestrator | 2026-02-16 02:47:45.757302 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-02-16 02:47:45.757311 | orchestrator | Monday 16 February 2026 02:47:08 +0000 (0:00:00.152) 0:00:03.406 ******* 2026-02-16 02:47:45.757323 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:47:45.757336 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:47:45.757345 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:47:45.757353 | orchestrator | 2026-02-16 02:47:45.757362 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-16 02:47:45.757370 | orchestrator | Monday 16 February 2026 02:47:08 +0000 (0:00:00.448) 0:00:03.855 ******* 2026-02-16 02:47:45.757379 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:47:45.757387 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:47:45.757396 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:47:45.757405 | orchestrator | 2026-02-16 02:47:45.757413 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-16 02:47:45.757422 | orchestrator | Monday 16 February 2026 02:47:08 +0000 (0:00:00.131) 0:00:03.987 ******* 2026-02-16 02:47:45.757430 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:47:45.757439 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:47:45.757447 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:47:45.757456 | orchestrator | 2026-02-16 02:47:45.757465 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-16 02:47:45.757479 | orchestrator | Monday 16 February 2026 02:47:09 +0000 (0:00:01.027) 0:00:05.014 ******* 2026-02-16 02:47:45.757488 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:47:45.757497 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:47:45.757505 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:47:45.757513 | orchestrator | 2026-02-16 02:47:45.757522 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-16 
02:47:45.757558 | orchestrator | Monday 16 February 2026 02:47:10 +0000 (0:00:00.504) 0:00:05.519 ******* 2026-02-16 02:47:45.757568 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:47:45.757577 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:47:45.757585 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:47:45.757594 | orchestrator | 2026-02-16 02:47:45.757602 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-16 02:47:45.757611 | orchestrator | Monday 16 February 2026 02:47:11 +0000 (0:00:01.024) 0:00:06.543 ******* 2026-02-16 02:47:45.757619 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:47:45.757628 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:47:45.757636 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:47:45.757645 | orchestrator | 2026-02-16 02:47:45.757653 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-02-16 02:47:45.757662 | orchestrator | Monday 16 February 2026 02:47:27 +0000 (0:00:16.252) 0:00:22.796 ******* 2026-02-16 02:47:45.757670 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:47:45.757679 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:47:45.757687 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:47:45.757696 | orchestrator | 2026-02-16 02:47:45.757705 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-02-16 02:47:45.757728 | orchestrator | Monday 16 February 2026 02:47:27 +0000 (0:00:00.084) 0:00:22.881 ******* 2026-02-16 02:47:45.757738 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:47:45.757746 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:47:45.757755 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:47:45.757763 | orchestrator | 2026-02-16 02:47:45.757775 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-16 
02:47:45.757784 | orchestrator | Monday 16 February 2026 02:47:35 +0000 (0:00:08.189) 0:00:31.070 ******* 2026-02-16 02:47:45.757793 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:47:45.757801 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:47:45.757810 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:47:45.757818 | orchestrator | 2026-02-16 02:47:45.757827 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-02-16 02:47:45.757836 | orchestrator | Monday 16 February 2026 02:47:36 +0000 (0:00:00.476) 0:00:31.547 ******* 2026-02-16 02:47:45.757844 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-02-16 02:47:45.757853 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-02-16 02:47:45.757862 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-02-16 02:47:45.757870 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-02-16 02:47:45.757879 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-02-16 02:47:45.757887 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-02-16 02:47:45.757895 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-02-16 02:47:45.757904 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-02-16 02:47:45.757912 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-02-16 02:47:45.757921 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-02-16 02:47:45.757929 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-02-16 02:47:45.757938 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-02-16 02:47:45.757946 | orchestrator | 2026-02-16 02:47:45.757955 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2026-02-16 02:47:45.757968 | orchestrator | Monday 16 February 2026 02:47:39 +0000 (0:00:03.520) 0:00:35.068 ******* 2026-02-16 02:47:45.757977 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:47:45.757986 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:47:45.757994 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:47:45.758003 | orchestrator | 2026-02-16 02:47:45.758011 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-16 02:47:45.758095 | orchestrator | 2026-02-16 02:47:45.758106 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-16 02:47:45.758114 | orchestrator | Monday 16 February 2026 02:47:42 +0000 (0:00:02.253) 0:00:37.321 ******* 2026-02-16 02:47:45.758123 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:47:45.758132 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:47:45.758140 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:47:45.758149 | orchestrator | ok: [testbed-manager] 2026-02-16 02:47:45.758157 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:47:45.758166 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:47:45.758174 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:47:45.758182 | orchestrator | 2026-02-16 02:47:45.758191 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 02:47:45.758200 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 02:47:45.758209 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 02:47:45.758219 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 02:47:45.758228 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 02:47:45.758236 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 02:47:45.758245 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 02:47:45.758254 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 02:47:45.758262 | orchestrator | 2026-02-16 02:47:45.758271 | orchestrator | 2026-02-16 02:47:45.758280 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 02:47:45.758288 | orchestrator | Monday 16 February 2026 02:47:45 +0000 (0:00:03.726) 0:00:41.048 ******* 2026-02-16 02:47:45.758297 | orchestrator | =============================================================================== 2026-02-16 02:47:45.758305 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.25s 2026-02-16 02:47:45.758314 | orchestrator | Install required packages (Debian) -------------------------------------- 8.19s 2026-02-16 02:47:45.758323 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.73s 2026-02-16 02:47:45.758331 | orchestrator | Copy fact files --------------------------------------------------------- 3.52s 2026-02-16 02:47:45.758340 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 2.25s 2026-02-16 02:47:45.758348 | orchestrator | Create custom facts directory ------------------------------------------- 1.42s 2026-02-16 02:47:45.758364 | orchestrator | Copy fact file ---------------------------------------------------------- 1.17s 2026-02-16 02:47:45.958480 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s 2026-02-16 02:47:45.958585 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.02s 2026-02-16 02:47:45.958618 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.50s 2026-02-16 02:47:46.047785 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s 2026-02-16 02:47:46.047850 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s 2026-02-16 02:47:46.047863 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.24s 2026-02-16 02:47:46.047874 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.23s 2026-02-16 02:47:46.047885 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s 2026-02-16 02:47:46.047897 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s 2026-02-16 02:47:46.047907 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2026-02-16 02:47:46.047918 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s 2026-02-16 02:47:46.220542 | orchestrator | + osism apply bootstrap 2026-02-16 02:47:58.296515 | orchestrator | 2026-02-16 02:47:58 | INFO  | Task a16c9cc9-98b2-4844-899a-8c0677c6eb39 (bootstrap) was prepared for execution. 2026-02-16 02:47:58.296628 | orchestrator | 2026-02-16 02:47:58 | INFO  | It takes a moment until task a16c9cc9-98b2-4844-899a-8c0677c6eb39 (bootstrap) has been started and output is visible here. 
2026-02-16 02:48:14.039051 | orchestrator | 2026-02-16 02:48:14.039236 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-02-16 02:48:14.039267 | orchestrator | 2026-02-16 02:48:14.039288 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-02-16 02:48:14.039308 | orchestrator | Monday 16 February 2026 02:48:02 +0000 (0:00:00.148) 0:00:00.148 ******* 2026-02-16 02:48:14.039327 | orchestrator | ok: [testbed-manager] 2026-02-16 02:48:14.039344 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:48:14.039355 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:48:14.039366 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:48:14.039377 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:48:14.039388 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:48:14.039399 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:48:14.039410 | orchestrator | 2026-02-16 02:48:14.039421 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-16 02:48:14.039432 | orchestrator | 2026-02-16 02:48:14.039443 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-16 02:48:14.039454 | orchestrator | Monday 16 February 2026 02:48:02 +0000 (0:00:00.238) 0:00:00.387 ******* 2026-02-16 02:48:14.039465 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:48:14.039476 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:48:14.039487 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:48:14.039498 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:48:14.039514 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:48:14.039532 | orchestrator | ok: [testbed-manager] 2026-02-16 02:48:14.039549 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:48:14.039566 | orchestrator | 2026-02-16 02:48:14.039583 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-02-16 02:48:14.039602 | orchestrator | 2026-02-16 02:48:14.039621 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-16 02:48:14.039642 | orchestrator | Monday 16 February 2026 02:48:06 +0000 (0:00:03.730) 0:00:04.117 ******* 2026-02-16 02:48:14.039662 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-16 02:48:14.039682 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-16 02:48:14.039695 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-16 02:48:14.039707 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-02-16 02:48:14.039720 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-16 02:48:14.039732 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-16 02:48:14.039745 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-16 02:48:14.039757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-02-16 02:48:14.039771 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-16 02:48:14.039808 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-16 02:48:14.039822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-16 02:48:14.039835 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-16 02:48:14.039847 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-02-16 02:48:14.039860 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-16 02:48:14.039873 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-16 02:48:14.039886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-16 02:48:14.039899 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-02-16 02:48:14.039911 | orchestrator | skipping: 
[testbed-manager] => (item=testbed-node-2)  2026-02-16 02:48:14.039924 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:48:14.039937 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-16 02:48:14.039949 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-16 02:48:14.039960 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-16 02:48:14.039971 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:48:14.039982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-16 02:48:14.039993 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-16 02:48:14.040003 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-02-16 02:48:14.040014 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-16 02:48:14.040025 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-16 02:48:14.040036 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-16 02:48:14.040047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-16 02:48:14.040057 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-16 02:48:14.040068 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-16 02:48:14.040079 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-02-16 02:48:14.040090 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-16 02:48:14.040100 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-16 02:48:14.040111 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-16 02:48:14.040121 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-16 02:48:14.040132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-16 02:48:14.040169 | orchestrator | skipping: 
[testbed-node-5] 2026-02-16 02:48:14.040191 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-16 02:48:14.040210 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-16 02:48:14.040229 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-16 02:48:14.040241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-16 02:48:14.040251 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-16 02:48:14.040262 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:48:14.040273 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-16 02:48:14.040311 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-16 02:48:14.040331 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-16 02:48:14.040348 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-16 02:48:14.040366 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:48:14.040407 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-16 02:48:14.040426 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-16 02:48:14.040445 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:48:14.040460 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-16 02:48:14.040494 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-16 02:48:14.040513 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:48:14.040529 | orchestrator | 2026-02-16 02:48:14.040547 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-02-16 02:48:14.040563 | orchestrator | 2026-02-16 02:48:14.040581 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-02-16 02:48:14.040600 | orchestrator | Monday 16 February 2026 02:48:06 +0000 (0:00:00.462) 
0:00:04.580 ******* 2026-02-16 02:48:14.040620 | orchestrator | ok: [testbed-manager] 2026-02-16 02:48:14.040639 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:48:14.040657 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:48:14.040675 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:48:14.040694 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:48:14.040713 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:48:14.040730 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:48:14.040749 | orchestrator | 2026-02-16 02:48:14.040761 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-02-16 02:48:14.040773 | orchestrator | Monday 16 February 2026 02:48:08 +0000 (0:00:01.204) 0:00:05.784 ******* 2026-02-16 02:48:14.040784 | orchestrator | ok: [testbed-manager] 2026-02-16 02:48:14.040795 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:48:14.040805 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:48:14.040816 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:48:14.040827 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:48:14.040838 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:48:14.040848 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:48:14.040859 | orchestrator | 2026-02-16 02:48:14.040870 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-02-16 02:48:14.040881 | orchestrator | Monday 16 February 2026 02:48:09 +0000 (0:00:01.146) 0:00:06.931 ******* 2026-02-16 02:48:14.040894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 02:48:14.040907 | orchestrator | 2026-02-16 02:48:14.040918 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-02-16 02:48:14.040930 | orchestrator | Monday 16 
February 2026 02:48:09 +0000 (0:00:00.257) 0:00:07.188 ******* 2026-02-16 02:48:14.040940 | orchestrator | changed: [testbed-manager] 2026-02-16 02:48:14.040951 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:48:14.040962 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:48:14.040973 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:48:14.040984 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:48:14.040995 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:48:14.041012 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:48:14.041030 | orchestrator | 2026-02-16 02:48:14.041049 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-02-16 02:48:14.041067 | orchestrator | Monday 16 February 2026 02:48:11 +0000 (0:00:02.033) 0:00:09.222 ******* 2026-02-16 02:48:14.041086 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:48:14.041106 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 02:48:14.041126 | orchestrator | 2026-02-16 02:48:14.041194 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-02-16 02:48:14.041218 | orchestrator | Monday 16 February 2026 02:48:11 +0000 (0:00:00.258) 0:00:09.480 ******* 2026-02-16 02:48:14.041237 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:48:14.041257 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:48:14.041275 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:48:14.041294 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:48:14.041307 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:48:14.041318 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:48:14.041341 | orchestrator | 2026-02-16 02:48:14.041360 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2026-02-16 02:48:14.041371 | orchestrator | Monday 16 February 2026 02:48:12 +0000 (0:00:01.052) 0:00:10.533 ******* 2026-02-16 02:48:14.041382 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:48:14.041393 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:48:14.041404 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:48:14.041414 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:48:14.041425 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:48:14.041436 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:48:14.041447 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:48:14.041457 | orchestrator | 2026-02-16 02:48:14.041468 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-02-16 02:48:14.041479 | orchestrator | Monday 16 February 2026 02:48:13 +0000 (0:00:00.631) 0:00:11.164 ******* 2026-02-16 02:48:14.041490 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:48:14.041501 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:48:14.041512 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:48:14.041523 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:48:14.041537 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:48:14.041556 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:48:14.041574 | orchestrator | ok: [testbed-manager] 2026-02-16 02:48:14.041592 | orchestrator | 2026-02-16 02:48:14.041611 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-16 02:48:14.041629 | orchestrator | Monday 16 February 2026 02:48:13 +0000 (0:00:00.414) 0:00:11.578 ******* 2026-02-16 02:48:14.041647 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:48:14.041662 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:48:14.041696 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:48:25.981655 | orchestrator | skipping: 
[testbed-node-5] 2026-02-16 02:48:25.981763 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:48:25.981780 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:48:25.981792 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:48:25.981803 | orchestrator | 2026-02-16 02:48:25.981816 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-16 02:48:25.981830 | orchestrator | Monday 16 February 2026 02:48:14 +0000 (0:00:00.222) 0:00:11.801 ******* 2026-02-16 02:48:25.981843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 02:48:25.981872 | orchestrator | 2026-02-16 02:48:25.981883 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-16 02:48:25.981895 | orchestrator | Monday 16 February 2026 02:48:14 +0000 (0:00:00.280) 0:00:12.081 ******* 2026-02-16 02:48:25.981906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 02:48:25.981917 | orchestrator | 2026-02-16 02:48:25.981928 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-02-16 02:48:25.981939 | orchestrator | Monday 16 February 2026 02:48:14 +0000 (0:00:00.284) 0:00:12.366 ******* 2026-02-16 02:48:25.981950 | orchestrator | ok: [testbed-manager] 2026-02-16 02:48:25.981962 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:48:25.981973 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:48:25.981984 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:48:25.981995 | orchestrator | ok: [testbed-node-3] 2026-02-16 
02:48:25.982006 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:48:25.982073 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:48:25.982096 | orchestrator | 2026-02-16 02:48:25.982107 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-16 02:48:25.982118 | orchestrator | Monday 16 February 2026 02:48:16 +0000 (0:00:01.470) 0:00:13.836 ******* 2026-02-16 02:48:25.982153 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:48:25.982165 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:48:25.982201 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:48:25.982214 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:48:25.982226 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:48:25.982238 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:48:25.982250 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:48:25.982262 | orchestrator | 2026-02-16 02:48:25.982274 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-16 02:48:25.982287 | orchestrator | Monday 16 February 2026 02:48:16 +0000 (0:00:00.213) 0:00:14.050 ******* 2026-02-16 02:48:25.982299 | orchestrator | ok: [testbed-manager] 2026-02-16 02:48:25.982312 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:48:25.982324 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:48:25.982336 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:48:25.982348 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:48:25.982360 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:48:25.982372 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:48:25.982384 | orchestrator | 2026-02-16 02:48:25.982397 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-16 02:48:25.982409 | orchestrator | Monday 16 February 2026 02:48:16 +0000 (0:00:00.544) 0:00:14.594 ******* 2026-02-16 02:48:25.982421 | orchestrator | skipping: 
[testbed-manager] 2026-02-16 02:48:25.982433 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:48:25.982446 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:48:25.982458 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:48:25.982470 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:48:25.982482 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:48:25.982494 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:48:25.982506 | orchestrator | 2026-02-16 02:48:25.982518 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-16 02:48:25.982532 | orchestrator | Monday 16 February 2026 02:48:17 +0000 (0:00:00.302) 0:00:14.897 ******* 2026-02-16 02:48:25.982544 | orchestrator | ok: [testbed-manager] 2026-02-16 02:48:25.982556 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:48:25.982566 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:48:25.982577 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:48:25.982588 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:48:25.982598 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:48:25.982618 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:48:25.982629 | orchestrator | 2026-02-16 02:48:25.982640 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-16 02:48:25.982651 | orchestrator | Monday 16 February 2026 02:48:17 +0000 (0:00:00.515) 0:00:15.412 ******* 2026-02-16 02:48:25.982661 | orchestrator | ok: [testbed-manager] 2026-02-16 02:48:25.982672 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:48:25.982683 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:48:25.982693 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:48:25.982704 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:48:25.982714 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:48:25.982725 | orchestrator | changed: 
[testbed-node-2] 2026-02-16 02:48:25.982736 | orchestrator | 2026-02-16 02:48:25.982746 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-16 02:48:25.982762 | orchestrator | Monday 16 February 2026 02:48:18 +0000 (0:00:01.131) 0:00:16.543 ******* 2026-02-16 02:48:25.982779 | orchestrator | ok: [testbed-manager] 2026-02-16 02:48:25.982797 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:48:25.982815 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:48:25.982829 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:48:25.982854 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:48:25.982877 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:48:25.982895 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:48:25.982913 | orchestrator | 2026-02-16 02:48:25.982932 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-16 02:48:25.982962 | orchestrator | Monday 16 February 2026 02:48:20 +0000 (0:00:01.162) 0:00:17.705 ******* 2026-02-16 02:48:25.983006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 02:48:25.983026 | orchestrator | 2026-02-16 02:48:25.983045 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-16 02:48:25.983063 | orchestrator | Monday 16 February 2026 02:48:20 +0000 (0:00:00.289) 0:00:17.995 ******* 2026-02-16 02:48:25.983082 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:48:25.983101 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:48:25.983120 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:48:25.983139 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:48:25.983159 | orchestrator | changed: [testbed-node-4] 2026-02-16 
02:48:25.983200 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:48:25.983218 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:48:25.983235 | orchestrator | 2026-02-16 02:48:25.983255 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-16 02:48:25.983272 | orchestrator | Monday 16 February 2026 02:48:21 +0000 (0:00:01.305) 0:00:19.301 ******* 2026-02-16 02:48:25.983291 | orchestrator | ok: [testbed-manager] 2026-02-16 02:48:25.983310 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:48:25.983328 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:48:25.983347 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:48:25.983360 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:48:25.983371 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:48:25.983382 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:48:25.983393 | orchestrator | 2026-02-16 02:48:25.983404 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-16 02:48:25.983415 | orchestrator | Monday 16 February 2026 02:48:21 +0000 (0:00:00.195) 0:00:19.496 ******* 2026-02-16 02:48:25.983425 | orchestrator | ok: [testbed-manager] 2026-02-16 02:48:25.983436 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:48:25.983447 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:48:25.983457 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:48:25.983468 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:48:25.983478 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:48:25.983489 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:48:25.983500 | orchestrator | 2026-02-16 02:48:25.983510 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-16 02:48:25.983521 | orchestrator | Monday 16 February 2026 02:48:22 +0000 (0:00:00.203) 0:00:19.700 ******* 2026-02-16 02:48:25.983532 | orchestrator | ok: [testbed-manager] 2026-02-16 02:48:25.983542 | 
orchestrator | ok: [testbed-node-3] 2026-02-16 02:48:25.983553 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:48:25.983564 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:48:25.983574 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:48:25.983585 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:48:25.983595 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:48:25.983606 | orchestrator | 2026-02-16 02:48:25.983617 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-16 02:48:25.983627 | orchestrator | Monday 16 February 2026 02:48:22 +0000 (0:00:00.221) 0:00:19.921 ******* 2026-02-16 02:48:25.983639 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 02:48:25.983652 | orchestrator | 2026-02-16 02:48:25.983662 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-16 02:48:25.983673 | orchestrator | Monday 16 February 2026 02:48:22 +0000 (0:00:00.267) 0:00:20.188 ******* 2026-02-16 02:48:25.983684 | orchestrator | ok: [testbed-manager] 2026-02-16 02:48:25.983695 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:48:25.983743 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:48:25.983754 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:48:25.983765 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:48:25.983775 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:48:25.983786 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:48:25.983796 | orchestrator | 2026-02-16 02:48:25.983808 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-16 02:48:25.983819 | orchestrator | Monday 16 February 2026 02:48:23 +0000 (0:00:00.514) 0:00:20.703 ******* 2026-02-16 02:48:25.983829 | orchestrator | 
skipping: [testbed-manager] 2026-02-16 02:48:25.983840 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:48:25.983851 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:48:25.983862 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:48:25.983873 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:48:25.983883 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:48:25.983894 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:48:25.983904 | orchestrator | 2026-02-16 02:48:25.983916 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-16 02:48:25.983927 | orchestrator | Monday 16 February 2026 02:48:23 +0000 (0:00:00.218) 0:00:20.921 ******* 2026-02-16 02:48:25.983937 | orchestrator | ok: [testbed-manager] 2026-02-16 02:48:25.983948 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:48:25.983959 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:48:25.983969 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:48:25.983980 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:48:25.983991 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:48:25.984002 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:48:25.984012 | orchestrator | 2026-02-16 02:48:25.984023 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-16 02:48:25.984034 | orchestrator | Monday 16 February 2026 02:48:24 +0000 (0:00:01.069) 0:00:21.991 ******* 2026-02-16 02:48:25.984045 | orchestrator | ok: [testbed-manager] 2026-02-16 02:48:25.984055 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:48:25.984066 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:48:25.984077 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:48:25.984087 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:48:25.984098 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:48:25.984119 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:48:25.984130 | orchestrator | 
2026-02-16 02:48:25.984141 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-16 02:48:25.984152 | orchestrator | Monday 16 February 2026 02:48:24 +0000 (0:00:00.560) 0:00:22.552 *******
2026-02-16 02:48:25.984163 | orchestrator | ok: [testbed-manager]
2026-02-16 02:48:25.984218 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:48:25.984232 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:48:25.984243 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:48:25.984265 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:49:05.174123 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:49:05.174231 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:49:05.174245 | orchestrator |
2026-02-16 02:49:05.174258 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-16 02:49:05.174327 | orchestrator | Monday 16 February 2026 02:48:25 +0000 (0:00:01.088) 0:00:23.641 *******
2026-02-16 02:49:05.174339 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:49:05.174349 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:49:05.174359 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:49:05.174369 | orchestrator | changed: [testbed-manager]
2026-02-16 02:49:05.174379 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:49:05.174389 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:49:05.174399 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:49:05.174408 | orchestrator |
2026-02-16 02:49:05.174418 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-02-16 02:49:05.174428 | orchestrator | Monday 16 February 2026 02:48:42 +0000 (0:00:16.492) 0:00:40.133 *******
2026-02-16 02:49:05.174438 | orchestrator | ok: [testbed-manager]
2026-02-16 02:49:05.174472 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:49:05.174482 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:49:05.174492 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:49:05.174501 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:49:05.174511 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:49:05.174520 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:49:05.174530 | orchestrator |
2026-02-16 02:49:05.174540 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-02-16 02:49:05.174549 | orchestrator | Monday 16 February 2026 02:48:42 +0000 (0:00:00.213) 0:00:40.346 *******
2026-02-16 02:49:05.174559 | orchestrator | ok: [testbed-manager]
2026-02-16 02:49:05.174568 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:49:05.174578 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:49:05.174587 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:49:05.174596 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:49:05.174606 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:49:05.174616 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:49:05.174628 | orchestrator |
2026-02-16 02:49:05.174639 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-02-16 02:49:05.174650 | orchestrator | Monday 16 February 2026 02:48:42 +0000 (0:00:00.208) 0:00:40.555 *******
2026-02-16 02:49:05.174661 | orchestrator | ok: [testbed-manager]
2026-02-16 02:49:05.174672 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:49:05.174683 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:49:05.174695 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:49:05.174706 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:49:05.174716 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:49:05.174726 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:49:05.174736 | orchestrator |
2026-02-16 02:49:05.174746 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-02-16 02:49:05.174755 | orchestrator | Monday 16 February 2026 02:48:43 +0000 (0:00:00.204) 0:00:40.759 *******
2026-02-16 02:49:05.174766 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 02:49:05.174778 | orchestrator |
2026-02-16 02:49:05.174788 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-02-16 02:49:05.174798 | orchestrator | Monday 16 February 2026 02:48:43 +0000 (0:00:00.245) 0:00:41.005 *******
2026-02-16 02:49:05.174807 | orchestrator | ok: [testbed-manager]
2026-02-16 02:49:05.174817 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:49:05.174826 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:49:05.174838 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:49:05.174855 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:49:05.174870 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:49:05.174890 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:49:05.174916 | orchestrator |
2026-02-16 02:49:05.174932 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-02-16 02:49:05.174947 | orchestrator | Monday 16 February 2026 02:48:45 +0000 (0:00:01.757) 0:00:42.763 *******
2026-02-16 02:49:05.174964 | orchestrator | changed: [testbed-manager]
2026-02-16 02:49:05.174981 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:49:05.174997 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:49:05.175013 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:49:05.175027 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:49:05.175037 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:49:05.175046 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:49:05.175056 | orchestrator |
2026-02-16 02:49:05.175065 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-02-16 02:49:05.175090 | orchestrator | Monday 16 February 2026 02:48:46 +0000 (0:00:01.023) 0:00:43.786 *******
2026-02-16 02:49:05.175099 | orchestrator | ok: [testbed-manager]
2026-02-16 02:49:05.175109 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:49:05.175118 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:49:05.175137 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:49:05.175147 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:49:05.175156 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:49:05.175166 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:49:05.175175 | orchestrator |
2026-02-16 02:49:05.175185 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-02-16 02:49:05.175194 | orchestrator | Monday 16 February 2026 02:48:46 +0000 (0:00:00.274) 0:00:44.554 *******
2026-02-16 02:49:05.175205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 02:49:05.175216 | orchestrator |
2026-02-16 02:49:05.175226 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-02-16 02:49:05.175236 | orchestrator | Monday 16 February 2026 02:48:47 +0000 (0:00:01.013) 0:00:45.842 *******
2026-02-16 02:49:05.175246 | orchestrator | changed: [testbed-manager]
2026-02-16 02:49:05.175255 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:49:05.175287 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:49:05.175297 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:49:05.175307 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:49:05.175316 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:49:05.175326 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:49:05.175335 | orchestrator |
2026-02-16 02:49:05.175362 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-02-16 02:49:05.175372 | orchestrator | Monday 16 February 2026 02:48:48 +0000 (0:00:00.262) 0:00:46.105 *******
2026-02-16 02:49:05.175382 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:49:05.175392 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:49:05.175403 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:49:05.175414 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:49:05.175424 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:49:05.175435 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:49:05.175446 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:49:05.175460 | orchestrator |
2026-02-16 02:49:05.175480 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-02-16 02:49:05.175492 | orchestrator | Monday 16 February 2026 02:48:48 +0000 (0:00:00.312) 0:00:46.417 *******
2026-02-16 02:49:05.175503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 02:49:05.175515 | orchestrator |
2026-02-16 02:49:05.175525 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-02-16 02:49:05.175536 | orchestrator | Monday 16 February 2026 02:48:48 +0000 (0:00:01.839) 0:00:48.257 *******
2026-02-16 02:49:05.175547 | orchestrator | ok: [testbed-manager]
2026-02-16 02:49:05.175558 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:49:05.175568 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:49:05.175579 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:49:05.175590 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:49:05.175600 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:49:05.175611 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:49:05.175622 | orchestrator |
2026-02-16 02:49:05.175633 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-02-16 02:49:05.175643 | orchestrator | Monday 16 February 2026 02:48:50 +0000 (0:00:01.128) 0:00:49.386 *******
2026-02-16 02:49:05.175654 | orchestrator | changed: [testbed-manager]
2026-02-16 02:49:05.175665 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:49:05.175676 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:49:05.175687 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:49:05.175697 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:49:05.175708 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:49:05.175719 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:49:05.175737 | orchestrator |
2026-02-16 02:49:05.175748 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-02-16 02:49:05.175759 | orchestrator | Monday 16 February 2026 02:48:51 +0000 (0:00:10.631) 0:01:00.018 *******
2026-02-16 02:49:05.175770 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:49:05.175780 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:49:05.175791 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:49:05.175802 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:49:05.175813 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:49:05.175823 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:49:05.175834 | orchestrator | changed: [testbed-manager]
2026-02-16 02:49:05.175845 | orchestrator |
2026-02-16 02:49:05.175856 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-02-16 02:49:05.175867 | orchestrator | Monday 16 February 2026 02:49:02 +0000 (0:00:01.313) 0:01:01.331 *******
2026-02-16 02:49:05.175878 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:49:05.175888 | orchestrator | ok: [testbed-manager]
2026-02-16 02:49:05.175899 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:49:05.175910 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:49:05.175920 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:49:05.175931 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:49:05.175942 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:49:05.175953 | orchestrator |
2026-02-16 02:49:05.175963 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-02-16 02:49:05.175974 | orchestrator | Monday 16 February 2026 02:49:03 +0000 (0:00:00.841) 0:01:02.172 *******
2026-02-16 02:49:05.175985 | orchestrator | ok: [testbed-manager]
2026-02-16 02:49:05.175996 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:49:05.176007 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:49:05.176030 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:49:05.176041 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:49:05.176051 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:49:05.176062 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:49:05.176072 | orchestrator |
2026-02-16 02:49:05.176083 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-02-16 02:49:05.176094 | orchestrator | Monday 16 February 2026 02:49:04 +0000 (0:00:00.206) 0:01:02.378 *******
2026-02-16 02:49:05.176111 | orchestrator | ok: [testbed-manager]
2026-02-16 02:49:05.176122 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:49:05.176132 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:49:05.176143 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:49:05.176153 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:49:05.176164 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:49:05.176174 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:49:05.176185 | orchestrator |
2026-02-16 02:49:05.176196 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-02-16 02:49:05.176207 | orchestrator | Monday 16 February 2026 02:49:04 +0000 (0:00:00.192) 0:01:02.571 *******
2026-02-16 02:49:05.176218 | orchestrator | ok: [testbed-manager]
2026-02-16 02:49:05.176228 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:49:05.176239 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:49:05.176249 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:49:05.176260 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:49:05.176290 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:49:05.176301 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:49:05.176311 | orchestrator |
2026-02-16 02:49:05.176322 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-02-16 02:49:05.176333 | orchestrator | Monday 16 February 2026 02:49:04 +0000 (0:00:00.270) 0:01:02.842 *******
2026-02-16 02:49:05.176344 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 02:49:05.176356 | orchestrator |
2026-02-16 02:49:05.176373 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-02-16 02:51:27.037683 | orchestrator | Monday 16 February 2026 02:49:05 +0000 (0:00:01.756) 0:01:04.598 *******
2026-02-16 02:51:27.037795 | orchestrator | ok: [testbed-manager]
2026-02-16 02:51:27.037811 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:51:27.037823 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:51:27.037835 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:51:27.037845 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:51:27.037856 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:51:27.037867 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:51:27.037878 | orchestrator |
2026-02-16 02:51:27.037890 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-02-16 02:51:27.037902 | orchestrator | Monday 16 February 2026 02:49:06 +0000 (0:00:00.614) 0:01:05.213 *******
2026-02-16 02:51:27.037912 | orchestrator | changed: [testbed-manager]
2026-02-16 02:51:27.037924 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:51:27.037935 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:51:27.037946 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:51:27.037956 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:51:27.037967 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:51:27.037978 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:51:27.037988 | orchestrator |
2026-02-16 02:51:27.038000 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-02-16 02:51:27.038073 | orchestrator | Monday 16 February 2026 02:49:07 +0000 (0:00:00.227) 0:01:05.440 *******
2026-02-16 02:51:27.038089 | orchestrator | ok: [testbed-manager]
2026-02-16 02:51:27.038100 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:51:27.038111 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:51:27.038122 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:51:27.038132 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:51:27.038143 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:51:27.038154 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:51:27.038164 | orchestrator |
2026-02-16 02:51:27.038186 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-02-16 02:51:27.038197 | orchestrator | Monday 16 February 2026 02:49:07 +0000 (0:00:01.267) 0:01:06.708 *******
2026-02-16 02:51:27.038208 | orchestrator | ok: [testbed-manager]
2026-02-16 02:51:27.038219 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:51:27.038230 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:51:27.038240 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:51:27.038251 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:51:27.038261 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:51:27.038272 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:51:27.038283 | orchestrator |
2026-02-16 02:51:27.038293 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-02-16 02:51:27.038304 | orchestrator | Monday 16 February 2026 02:49:09 +0000 (0:00:01.816) 0:01:08.525 *******
2026-02-16 02:51:27.038315 | orchestrator | changed: [testbed-manager]
2026-02-16 02:51:27.038326 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:51:27.038337 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:51:27.038347 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:51:27.038358 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:51:27.038369 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:51:27.038380 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:51:27.038391 | orchestrator |
2026-02-16 02:51:27.038406 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-02-16 02:51:27.038418 | orchestrator | Monday 16 February 2026 02:49:10 +0000 (0:00:02.499) 0:01:11.024 *******
2026-02-16 02:51:27.038429 | orchestrator | ok: [testbed-manager]
2026-02-16 02:51:27.038439 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:51:27.038450 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:51:27.038461 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:51:27.038471 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:51:27.038482 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:51:27.038493 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:51:27.038503 | orchestrator |
2026-02-16 02:51:27.038514 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-02-16 02:51:27.038548 | orchestrator | Monday 16 February 2026 02:49:13 +0000 (0:00:33.596) 0:01:44.621 *******
2026-02-16 02:51:27.038560 | orchestrator | ok: [testbed-manager]
2026-02-16 02:51:27.038571 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:51:27.038581 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:51:27.038630 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:51:27.038642 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:51:27.038653 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:51:27.038663 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:51:27.038674 | orchestrator |
2026-02-16 02:51:27.038685 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-02-16 02:51:27.038696 | orchestrator | Monday 16 February 2026 02:49:46 +0000 (0:01:26.079) 0:03:10.700 *******
2026-02-16 02:51:27.038707 | orchestrator | changed: [testbed-manager]
2026-02-16 02:51:27.038717 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:51:27.038728 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:51:27.038739 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:51:27.038750 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:51:27.038760 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:51:27.038771 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:51:27.038781 | orchestrator |
2026-02-16 02:51:27.038792 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-02-16 02:51:27.038803 | orchestrator | Monday 16 February 2026 02:51:13 +0000 (0:00:01.740) 0:03:12.441 *******
2026-02-16 02:51:27.038814 | orchestrator | ok: [testbed-manager]
2026-02-16 02:51:27.038825 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:51:27.038836 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:51:27.038847 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:51:27.038857 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:51:27.038868 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:51:27.038878 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:51:27.038889 | orchestrator |
2026-02-16 02:51:27.038899 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-02-16 02:51:27.038910 | orchestrator | Monday 16 February 2026 02:51:14 +0000 (0:00:11.093) 0:03:23.534 *******
2026-02-16 02:51:27.038921 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:51:27.038931 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:51:27.038942 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:51:27.038952 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:51:27.038962 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:51:27.038973 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:51:27.038983 | orchestrator | changed: [testbed-manager]
2026-02-16 02:51:27.038994 | orchestrator |
2026-02-16 02:51:27.039005 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-02-16 02:51:27.039015 | orchestrator | Monday 16 February 2026 02:51:25 +0000 (0:00:00.432) 0:03:23.967 *******
2026-02-16 02:51:27.039060 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-02-16 02:51:27.039096 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-02-16 02:51:27.039120 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-02-16 02:51:27.039133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-16 02:51:27.039144 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-16 02:51:27.039155 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-02-16 02:51:27.039166 | orchestrator |
2026-02-16 02:51:27.039177 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-02-16 02:51:27.039188 | orchestrator | Monday 16 February 2026 02:51:26 +0000 (0:00:00.651) 0:03:24.618 *******
2026-02-16 02:51:27.039199 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-16 02:51:27.039210 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:51:27.039221 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-16 02:51:27.039231 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:51:27.039242 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-16 02:51:27.039253 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:51:27.039268 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-16 02:51:27.039279 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:51:27.039290 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-16 02:51:27.039301 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-16 02:51:27.039311 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-16 02:51:27.039322 | orchestrator |
2026-02-16 02:51:27.039333 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-02-16 02:51:27.039343 | orchestrator | Monday 16 February 2026 02:51:26 +0000 (0:00:00.478) 0:03:29.319 *******
2026-02-16 02:51:27.039354 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-16 02:51:27.039366 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-16 02:51:27.039377 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-16 02:51:27.039388 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-16 02:51:27.039398 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-16 02:51:27.039416 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-16 02:51:32.738709 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-16 02:51:32.738836 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-16 02:51:32.738878 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-16 02:51:32.738891 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-16 02:51:32.738903 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-16 02:51:32.738914 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-16 02:51:32.738924 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-16 02:51:32.738935 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-16 02:51:32.738946 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-16 02:51:32.738957 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-16 02:51:32.738969 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-16 02:51:32.738980 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-16 02:51:32.738991 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-16 02:51:32.739001 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-16 02:51:32.739013 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:51:32.739025 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:51:32.739036 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-16 02:51:32.739047 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-16 02:51:32.739058 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-16 02:51:32.739069 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-16 02:51:32.739080 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-16 02:51:32.739090 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-16 02:51:32.739101 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-16 02:51:32.739112 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-16 02:51:32.739122 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-16 02:51:32.739133 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-16 02:51:32.739144 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-16 02:51:32.739155 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-16 02:51:32.739166 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-16 02:51:32.739177 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-16 02:51:32.739202 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-16 02:51:32.739215 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-16 02:51:32.739228 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-16 02:51:32.739241 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-16 02:51:32.739253 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-16 02:51:32.739273 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-16 02:51:32.739286 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:51:32.739299 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:51:32.739311 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-16 02:51:32.739324 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-16 02:51:32.739336 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-16 02:51:32.739348 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-16 02:51:32.739360 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-16 02:51:32.739392 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-16 02:51:32.739405 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-16 02:51:32.739417 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-16 02:51:32.739429 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-16 02:51:32.739441 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-16 02:51:32.739454 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-16 02:51:32.739466 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-16 02:51:32.739477 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-16 02:51:32.739487 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-16 02:51:32.739498 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-16 02:51:32.739509 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-16 02:51:32.739520 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-16 02:51:32.739531 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-16 02:51:32.739541 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-16 02:51:32.739552 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-16 02:51:32.739563 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-16 02:51:32.739573 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-16 02:51:32.739584 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-16 02:51:32.739595 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-16 02:51:32.739632 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-16 02:51:32.739644 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-16 02:51:32.739655 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-16 02:51:32.739666 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-16 02:51:32.739676 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-16 02:51:32.739688 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-16 02:51:32.739706 | orchestrator |
2026-02-16 02:51:32.739719 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-16 02:51:32.739731 | orchestrator | Monday 16 February 2026 02:51:31 +0000 (0:00:04.701) 0:03:29.319 *******
2026-02-16 02:51:32.739749 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-16 02:51:32.739768 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-16 02:51:32.739787 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-16 02:51:32.739806 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-16 02:51:32.739833 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-16 02:51:32.739852 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-16 02:51:32.739868 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-16 02:51:32.739879 | orchestrator |
2026-02-16 02:51:32.739891 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-16 02:51:32.739901 | orchestrator | Monday 16 February 2026 02:51:32 +0000 (0:00:00.606) 0:03:29.926 *******
2026-02-16 02:51:32.739912 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-16 02:51:32.739929 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:51:32.739947 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-16 02:51:32.739967 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:51:32.739985 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-16 02:51:32.740000 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:51:32.740011 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-16 02:51:32.740022 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:51:32.740033 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-16 02:51:32.740044 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-16 02:51:32.740067 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-16 02:51:46.632807 | orchestrator |
2026-02-16 02:51:46.632919 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-16 02:51:46.632935 | orchestrator | Monday 16 February 2026 02:51:32 +0000 (0:00:00.478) 0:03:30.404 *******
2026-02-16 02:51:46.632947 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-16 02:51:46.632959 | orchestrator | skipping:
[testbed-manager] 2026-02-16 02:51:46.632972 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-16 02:51:46.632983 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-16 02:51:46.632994 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:51:46.633005 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-16 02:51:46.633016 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:51:46.633027 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:51:46.633038 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-16 02:51:46.633049 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-16 02:51:46.633060 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-16 02:51:46.633071 | orchestrator | 2026-02-16 02:51:46.633082 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-02-16 02:51:46.633117 | orchestrator | Monday 16 February 2026 02:51:33 +0000 (0:00:00.573) 0:03:30.978 ******* 2026-02-16 02:51:46.633129 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-16 02:51:46.633140 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:51:46.633151 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-16 02:51:46.633162 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-16 02:51:46.633172 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:51:46.633183 | orchestrator | skipping: 
[testbed-node-1] 2026-02-16 02:51:46.633194 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-16 02:51:46.633204 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:51:46.633215 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-16 02:51:46.633226 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-16 02:51:46.633237 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-16 02:51:46.633248 | orchestrator | 2026-02-16 02:51:46.633259 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-02-16 02:51:46.633270 | orchestrator | Monday 16 February 2026 02:51:34 +0000 (0:00:01.570) 0:03:32.549 ******* 2026-02-16 02:51:46.633280 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:51:46.633293 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:51:46.633306 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:51:46.633318 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:51:46.633330 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:51:46.633342 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:51:46.633355 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:51:46.633367 | orchestrator | 2026-02-16 02:51:46.633380 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-02-16 02:51:46.633392 | orchestrator | Monday 16 February 2026 02:51:35 +0000 (0:00:00.292) 0:03:32.841 ******* 2026-02-16 02:51:46.633405 | orchestrator | ok: [testbed-manager] 2026-02-16 02:51:46.633418 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:51:46.633430 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:51:46.633443 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:51:46.633455 | 
orchestrator | ok: [testbed-node-2] 2026-02-16 02:51:46.633467 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:51:46.633479 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:51:46.633491 | orchestrator | 2026-02-16 02:51:46.633504 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-02-16 02:51:46.633517 | orchestrator | Monday 16 February 2026 02:51:40 +0000 (0:00:05.752) 0:03:38.594 ******* 2026-02-16 02:51:46.633529 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-02-16 02:51:46.633542 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-02-16 02:51:46.633554 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:51:46.633567 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:51:46.633579 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-02-16 02:51:46.633591 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-02-16 02:51:46.633603 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:51:46.633615 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-02-16 02:51:46.633628 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:51:46.633670 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-02-16 02:51:46.633700 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:51:46.633712 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:51:46.633723 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-02-16 02:51:46.633734 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:51:46.633753 | orchestrator | 2026-02-16 02:51:46.633764 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-02-16 02:51:46.633775 | orchestrator | Monday 16 February 2026 02:51:41 +0000 (0:00:00.276) 0:03:38.871 ******* 2026-02-16 02:51:46.633786 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-02-16 02:51:46.633797 | orchestrator | ok: [testbed-node-3] => 
(item=cron) 2026-02-16 02:51:46.633808 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-02-16 02:51:46.633837 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-02-16 02:51:46.633849 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-02-16 02:51:46.633859 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-02-16 02:51:46.633870 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-02-16 02:51:46.633881 | orchestrator | 2026-02-16 02:51:46.633892 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-02-16 02:51:46.633902 | orchestrator | Monday 16 February 2026 02:51:42 +0000 (0:00:01.001) 0:03:39.872 ******* 2026-02-16 02:51:46.633915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 02:51:46.633928 | orchestrator | 2026-02-16 02:51:46.633939 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-02-16 02:51:46.633950 | orchestrator | Monday 16 February 2026 02:51:42 +0000 (0:00:00.473) 0:03:40.346 ******* 2026-02-16 02:51:46.633961 | orchestrator | ok: [testbed-manager] 2026-02-16 02:51:46.633972 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:51:46.633982 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:51:46.633993 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:51:46.634004 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:51:46.634075 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:51:46.634089 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:51:46.634100 | orchestrator | 2026-02-16 02:51:46.634111 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-02-16 02:51:46.634122 | orchestrator | Monday 16 February 2026 02:51:43 +0000 (0:00:01.198) 0:03:41.544 
******* 2026-02-16 02:51:46.634133 | orchestrator | ok: [testbed-manager] 2026-02-16 02:51:46.634154 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:51:46.634165 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:51:46.634175 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:51:46.634186 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:51:46.634197 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:51:46.634207 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:51:46.634218 | orchestrator | 2026-02-16 02:51:46.634229 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-02-16 02:51:46.634240 | orchestrator | Monday 16 February 2026 02:51:44 +0000 (0:00:00.606) 0:03:42.150 ******* 2026-02-16 02:51:46.634250 | orchestrator | changed: [testbed-manager] 2026-02-16 02:51:46.634261 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:51:46.634272 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:51:46.634282 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:51:46.634293 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:51:46.634304 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:51:46.634314 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:51:46.634325 | orchestrator | 2026-02-16 02:51:46.634336 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-02-16 02:51:46.634347 | orchestrator | Monday 16 February 2026 02:51:45 +0000 (0:00:00.594) 0:03:42.744 ******* 2026-02-16 02:51:46.634358 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:51:46.634368 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:51:46.634379 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:51:46.634390 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:51:46.634400 | orchestrator | ok: [testbed-manager] 2026-02-16 02:51:46.634411 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:51:46.634421 | orchestrator | ok: [testbed-node-2] 2026-02-16 
02:51:46.634432 | orchestrator | 2026-02-16 02:51:46.634443 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-02-16 02:51:46.634463 | orchestrator | Monday 16 February 2026 02:51:45 +0000 (0:00:00.607) 0:03:43.352 ******* 2026-02-16 02:51:46.634483 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771208837.5559485, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 02:51:46.634498 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771208865.7839668, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 02:51:46.634510 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771208874.6346424, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 02:51:46.634544 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771208862.3218555, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 02:51:51.244103 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771208861.080825, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 02:51:51.244246 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771208863.4771647, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 02:51:51.244276 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771208869.818138, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 02:51:51.244330 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 02:51:51.244358 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 02:51:51.244370 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 02:51:51.244381 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 02:51:51.244422 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 02:51:51.244435 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 
02:51:51.244446 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 02:51:51.244467 | orchestrator | 2026-02-16 02:51:51.244480 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-02-16 02:51:51.244492 | orchestrator | Monday 16 February 2026 02:51:46 +0000 (0:00:00.944) 0:03:44.297 ******* 2026-02-16 02:51:51.244503 | orchestrator | changed: [testbed-manager] 2026-02-16 02:51:51.244516 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:51:51.244526 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:51:51.244537 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:51:51.244549 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:51:51.244560 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:51:51.244571 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:51:51.244581 | orchestrator | 2026-02-16 02:51:51.244593 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-02-16 02:51:51.244604 | orchestrator | Monday 16 February 2026 02:51:47 +0000 (0:00:01.096) 0:03:45.393 ******* 2026-02-16 02:51:51.244615 | orchestrator | changed: [testbed-manager] 2026-02-16 02:51:51.244628 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:51:51.244640 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:51:51.244687 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:51:51.244699 | orchestrator | changed: [testbed-node-1] 
2026-02-16 02:51:51.244711 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:51:51.244723 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:51:51.244734 | orchestrator | 2026-02-16 02:51:51.244752 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-02-16 02:51:51.244765 | orchestrator | Monday 16 February 2026 02:51:48 +0000 (0:00:01.120) 0:03:46.514 ******* 2026-02-16 02:51:51.244778 | orchestrator | changed: [testbed-manager] 2026-02-16 02:51:51.244790 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:51:51.244802 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:51:51.244815 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:51:51.244826 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:51:51.244838 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:51:51.244849 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:51:51.244861 | orchestrator | 2026-02-16 02:51:51.244873 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-02-16 02:51:51.244885 | orchestrator | Monday 16 February 2026 02:51:49 +0000 (0:00:01.061) 0:03:47.575 ******* 2026-02-16 02:51:51.244898 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:51:51.244910 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:51:51.244922 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:51:51.244934 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:51:51.244946 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:51:51.244958 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:51:51.244970 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:51:51.244981 | orchestrator | 2026-02-16 02:51:51.244992 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-02-16 02:51:51.245002 | orchestrator | Monday 16 February 2026 02:51:50 +0000 (0:00:00.262) 0:03:47.838 ******* 2026-02-16 
02:51:51.245013 | orchestrator | ok: [testbed-manager] 2026-02-16 02:51:51.245025 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:51:51.245035 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:51:51.245046 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:51:51.245057 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:51:51.245067 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:51:51.245078 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:51:51.245089 | orchestrator | 2026-02-16 02:51:51.245100 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-02-16 02:51:51.245110 | orchestrator | Monday 16 February 2026 02:51:50 +0000 (0:00:00.690) 0:03:48.528 ******* 2026-02-16 02:51:51.245123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 02:51:51.245145 | orchestrator | 2026-02-16 02:51:51.245156 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-02-16 02:51:51.245175 | orchestrator | Monday 16 February 2026 02:51:51 +0000 (0:00:00.384) 0:03:48.913 ******* 2026-02-16 02:53:06.458712 | orchestrator | ok: [testbed-manager] 2026-02-16 02:53:06.458880 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:53:06.458904 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:53:06.458916 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:53:06.458928 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:53:06.458939 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:53:06.458950 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:53:06.458962 | orchestrator | 2026-02-16 02:53:06.458975 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-02-16 02:53:06.458987 | orchestrator | 
Monday 16 February 2026 02:51:59 +0000 (0:00:08.481) 0:03:57.395 ******* 2026-02-16 02:53:06.458998 | orchestrator | ok: [testbed-manager] 2026-02-16 02:53:06.459010 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:53:06.459021 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:53:06.459032 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:53:06.459043 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:53:06.459054 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:53:06.459065 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:53:06.459075 | orchestrator | 2026-02-16 02:53:06.459087 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-02-16 02:53:06.459098 | orchestrator | Monday 16 February 2026 02:52:00 +0000 (0:00:01.205) 0:03:58.600 ******* 2026-02-16 02:53:06.459109 | orchestrator | ok: [testbed-manager] 2026-02-16 02:53:06.459120 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:53:06.459130 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:53:06.459141 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:53:06.459152 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:53:06.459163 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:53:06.459173 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:53:06.459184 | orchestrator | 2026-02-16 02:53:06.459195 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-02-16 02:53:06.459206 | orchestrator | Monday 16 February 2026 02:52:01 +0000 (0:00:01.042) 0:03:59.643 ******* 2026-02-16 02:53:06.459220 | orchestrator | ok: [testbed-manager] 2026-02-16 02:53:06.459239 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:53:06.459258 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:53:06.459277 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:53:06.459297 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:53:06.459320 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:53:06.459340 | orchestrator | ok: 
[testbed-node-2]
2026-02-16 02:53:06.459360 | orchestrator |
2026-02-16 02:53:06.459380 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-02-16 02:53:06.459401 | orchestrator | Monday 16 February 2026 02:52:02 +0000 (0:00:00.266) 0:03:59.909 *******
2026-02-16 02:53:06.459416 | orchestrator | ok: [testbed-manager]
2026-02-16 02:53:06.459428 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:53:06.459440 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:53:06.459453 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:53:06.459464 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:53:06.459474 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:53:06.459485 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:53:06.459496 | orchestrator |
2026-02-16 02:53:06.459507 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-02-16 02:53:06.459518 | orchestrator | Monday 16 February 2026 02:52:02 +0000 (0:00:00.270) 0:04:00.179 *******
2026-02-16 02:53:06.459529 | orchestrator | ok: [testbed-manager]
2026-02-16 02:53:06.459540 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:53:06.459551 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:53:06.459587 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:53:06.459599 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:53:06.459609 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:53:06.459619 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:53:06.459630 | orchestrator |
2026-02-16 02:53:06.459641 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-02-16 02:53:06.459652 | orchestrator | Monday 16 February 2026 02:52:02 +0000 (0:00:00.250) 0:04:00.430 *******
2026-02-16 02:53:06.459663 | orchestrator | ok: [testbed-manager]
2026-02-16 02:53:06.459673 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:53:06.459684 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:53:06.459695 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:53:06.459705 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:53:06.459715 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:53:06.459726 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:53:06.459736 | orchestrator |
2026-02-16 02:53:06.459747 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-02-16 02:53:06.459758 | orchestrator | Monday 16 February 2026 02:52:08 +0000 (0:00:05.697) 0:04:06.127 *******
2026-02-16 02:53:06.459771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 02:53:06.459785 | orchestrator |
2026-02-16 02:53:06.459804 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-02-16 02:53:06.459863 | orchestrator | Monday 16 February 2026 02:52:08 +0000 (0:00:00.368) 0:04:06.496 *******
2026-02-16 02:53:06.459883 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-02-16 02:53:06.459902 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-02-16 02:53:06.459922 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-02-16 02:53:06.459941 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:53:06.459960 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-02-16 02:53:06.459992 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-02-16 02:53:06.460004 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-02-16 02:53:06.460015 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:53:06.460025 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-02-16 02:53:06.460036 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:53:06.460047 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-02-16 02:53:06.460057 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-02-16 02:53:06.460068 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:53:06.460079 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-02-16 02:53:06.460090 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-02-16 02:53:06.460101 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-02-16 02:53:06.460131 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:53:06.460143 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:53:06.460154 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-02-16 02:53:06.460165 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-02-16 02:53:06.460175 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:53:06.460186 | orchestrator |
2026-02-16 02:53:06.460198 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-02-16 02:53:06.460208 | orchestrator | Monday 16 February 2026 02:52:09 +0000 (0:00:00.310) 0:04:06.806 *******
2026-02-16 02:53:06.460220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 02:53:06.460232 | orchestrator |
2026-02-16 02:53:06.460242 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-02-16 02:53:06.460264 | orchestrator | Monday 16 February 2026 02:52:09 +0000 (0:00:00.382) 0:04:07.188 *******
2026-02-16 02:53:06.460275 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-02-16 02:53:06.460287 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:53:06.460306 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-02-16 02:53:06.460325 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-02-16 02:53:06.460343 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:53:06.460362 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-02-16 02:53:06.460381 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:53:06.460400 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:53:06.460419 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-02-16 02:53:06.460438 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-02-16 02:53:06.460456 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:53:06.460468 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:53:06.460479 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-02-16 02:53:06.460489 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:53:06.460500 | orchestrator |
2026-02-16 02:53:06.460511 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-02-16 02:53:06.460521 | orchestrator | Monday 16 February 2026 02:52:09 +0000 (0:00:00.267) 0:04:07.456 *******
2026-02-16 02:53:06.460533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 02:53:06.460544 | orchestrator |
2026-02-16 02:53:06.460554 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-16 02:53:06.460565 | orchestrator | Monday 16 February 2026 02:52:10 +0000 (0:00:00.372) 0:04:07.828 *******
2026-02-16 02:53:06.460576 | orchestrator | changed: [testbed-manager]
2026-02-16 02:53:06.460586 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:53:06.460597 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:53:06.460607 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:53:06.460624 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:53:06.460635 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:53:06.460646 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:53:06.460657 | orchestrator |
2026-02-16 02:53:06.460667 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-02-16 02:53:06.460678 | orchestrator | Monday 16 February 2026 02:52:44 +0000 (0:00:34.232) 0:04:42.060 *******
2026-02-16 02:53:06.460689 | orchestrator | changed: [testbed-manager]
2026-02-16 02:53:06.460699 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:53:06.460710 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:53:06.460720 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:53:06.460731 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:53:06.460741 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:53:06.460752 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:53:06.460762 | orchestrator |
2026-02-16 02:53:06.460773 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-02-16 02:53:06.460783 | orchestrator | Monday 16 February 2026 02:52:52 +0000 (0:00:07.643) 0:04:49.703 *******
2026-02-16 02:53:06.460830 | orchestrator | changed: [testbed-manager]
2026-02-16 02:53:06.460842 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:53:06.460852 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:53:06.460863 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:53:06.460874 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:53:06.460884 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:53:06.460895 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:53:06.460905 | orchestrator |
2026-02-16 02:53:06.460916 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-02-16 02:53:06.460936 | orchestrator | Monday 16 February 2026 02:52:59 +0000 (0:00:07.234) 0:04:56.938 *******
2026-02-16 02:53:06.460946 | orchestrator | ok: [testbed-manager]
2026-02-16 02:53:06.460957 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:53:06.460968 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:53:06.460979 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:53:06.460989 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:53:06.461000 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:53:06.461010 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:53:06.461021 | orchestrator |
2026-02-16 02:53:06.461032 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-02-16 02:53:06.461043 | orchestrator | Monday 16 February 2026 02:53:00 +0000 (0:00:01.645) 0:04:58.583 *******
2026-02-16 02:53:06.461053 | orchestrator | changed: [testbed-manager]
2026-02-16 02:53:06.461064 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:53:06.461075 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:53:06.461086 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:53:06.461096 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:53:06.461107 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:53:06.461118 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:53:06.461129 | orchestrator |
2026-02-16 02:53:06.461150 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-02-16 02:53:17.208059 | orchestrator | Monday 16 February 2026 02:53:06 +0000 (0:00:05.531) 0:05:04.114 *******
2026-02-16 02:53:17.208172 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 02:53:17.208189 | orchestrator |
2026-02-16 02:53:17.208202 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-02-16 02:53:17.208215 | orchestrator | Monday 16 February 2026 02:53:06 +0000 (0:00:00.538) 0:05:04.653 *******
2026-02-16 02:53:17.208227 | orchestrator | changed: [testbed-manager]
2026-02-16 02:53:17.208240 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:53:17.208251 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:53:17.208261 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:53:17.208272 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:53:17.208283 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:53:17.208294 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:53:17.208305 | orchestrator |
2026-02-16 02:53:17.208316 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-02-16 02:53:17.208327 | orchestrator | Monday 16 February 2026 02:53:07 +0000 (0:00:00.727) 0:05:05.380 *******
2026-02-16 02:53:17.208338 | orchestrator | ok: [testbed-manager]
2026-02-16 02:53:17.208350 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:53:17.208361 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:53:17.208372 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:53:17.208382 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:53:17.208393 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:53:17.208403 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:53:17.208414 | orchestrator |
2026-02-16 02:53:17.208425 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-02-16 02:53:17.208436 | orchestrator | Monday 16 February 2026 02:53:09 +0000 (0:00:01.617) 0:05:06.997 *******
2026-02-16 02:53:17.208446 | orchestrator | changed: [testbed-manager]
2026-02-16 02:53:17.208457 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:53:17.208468 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:53:17.208479 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:53:17.208489 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:53:17.208501 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:53:17.208512 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:53:17.208523 | orchestrator |
2026-02-16 02:53:17.208534 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-02-16 02:53:17.208545 | orchestrator | Monday 16 February 2026 02:53:10 +0000 (0:00:00.767) 0:05:07.765 *******
2026-02-16 02:53:17.208582 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:53:17.208595 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:53:17.208607 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:53:17.208619 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:53:17.208631 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:53:17.208644 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:53:17.208656 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:53:17.208668 | orchestrator |
2026-02-16 02:53:17.208681 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-02-16 02:53:17.208694 | orchestrator | Monday 16 February 2026 02:53:10 +0000 (0:00:00.255) 0:05:08.020 *******
2026-02-16 02:53:17.208706 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:53:17.208719 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:53:17.208731 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:53:17.208759 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:53:17.208771 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:53:17.208783 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:53:17.208795 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:53:17.208808 | orchestrator |
2026-02-16 02:53:17.208822 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-02-16 02:53:17.208863 | orchestrator | Monday 16 February 2026 02:53:10 +0000 (0:00:00.384) 0:05:08.404 *******
2026-02-16 02:53:17.208875 | orchestrator | ok: [testbed-manager]
2026-02-16 02:53:17.208885 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:53:17.208896 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:53:17.208907 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:53:17.208918 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:53:17.208928 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:53:17.208939 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:53:17.208949 | orchestrator |
2026-02-16 02:53:17.208960 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-02-16 02:53:17.208971 | orchestrator | Monday 16 February 2026 02:53:11 +0000 (0:00:00.300) 0:05:08.705 *******
2026-02-16 02:53:17.208982 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:53:17.208993 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:53:17.209003 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:53:17.209014 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:53:17.209025 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:53:17.209035 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:53:17.209046 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:53:17.209057 | orchestrator |
2026-02-16 02:53:17.209068 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-02-16 02:53:17.209079 | orchestrator | Monday 16 February 2026 02:53:11 +0000 (0:00:00.291) 0:05:08.997 *******
2026-02-16 02:53:17.209090 | orchestrator | ok: [testbed-manager]
2026-02-16 02:53:17.209101 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:53:17.209112 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:53:17.209123 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:53:17.209133 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:53:17.209144 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:53:17.209155 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:53:17.209166 | orchestrator |
2026-02-16 02:53:17.209177 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-02-16 02:53:17.209187 | orchestrator | Monday 16 February 2026 02:53:11 +0000 (0:00:00.278) 0:05:09.284 *******
2026-02-16 02:53:17.209198 | orchestrator | ok: [testbed-manager] =>
2026-02-16 02:53:17.209209 | orchestrator |   docker_version: 5:27.5.1
2026-02-16 02:53:17.209220 | orchestrator | ok: [testbed-node-3] =>
2026-02-16 02:53:17.209231 | orchestrator |   docker_version: 5:27.5.1
2026-02-16 02:53:17.209241 | orchestrator | ok: [testbed-node-4] =>
2026-02-16 02:53:17.209252 | orchestrator |   docker_version: 5:27.5.1
2026-02-16 02:53:17.209263 | orchestrator | ok: [testbed-node-5] =>
2026-02-16 02:53:17.209274 | orchestrator |   docker_version: 5:27.5.1
2026-02-16 02:53:17.209311 | orchestrator | ok: [testbed-node-0] =>
2026-02-16 02:53:17.209323 | orchestrator |   docker_version: 5:27.5.1
2026-02-16 02:53:17.209334 | orchestrator | ok: [testbed-node-1] =>
2026-02-16 02:53:17.209345 | orchestrator |   docker_version: 5:27.5.1
2026-02-16 02:53:17.209356 | orchestrator | ok: [testbed-node-2] =>
2026-02-16 02:53:17.209367 | orchestrator |   docker_version: 5:27.5.1
2026-02-16 02:53:17.209377 | orchestrator |
2026-02-16 02:53:17.209388 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-02-16 02:53:17.209399 | orchestrator | Monday 16 February 2026 02:53:11 +0000 (0:00:00.278) 0:05:09.563 *******
2026-02-16 02:53:17.209410 | orchestrator | ok: [testbed-manager] =>
2026-02-16 02:53:17.209421 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-16 02:53:17.209431 | orchestrator | ok: [testbed-node-3] =>
2026-02-16 02:53:17.209442 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-16 02:53:17.209453 | orchestrator | ok: [testbed-node-4] =>
2026-02-16 02:53:17.209463 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-16 02:53:17.209474 | orchestrator | ok: [testbed-node-5] =>
2026-02-16 02:53:17.209485 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-16 02:53:17.209495 | orchestrator | ok: [testbed-node-0] =>
2026-02-16 02:53:17.209506 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-16 02:53:17.209516 | orchestrator | ok: [testbed-node-1] =>
2026-02-16 02:53:17.209527 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-16 02:53:17.209538 | orchestrator | ok: [testbed-node-2] =>
2026-02-16 02:53:17.209549 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-16 02:53:17.209559 | orchestrator |
2026-02-16 02:53:17.209570 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-02-16 02:53:17.209581 | orchestrator | Monday 16 February 2026 02:53:12 +0000 (0:00:00.293) 0:05:09.857 *******
2026-02-16 02:53:17.209592 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:53:17.209603 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:53:17.209614 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:53:17.209624 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:53:17.209635 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:53:17.209645 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:53:17.209656 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:53:17.209667 | orchestrator |
2026-02-16 02:53:17.209678 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-02-16 02:53:17.209689 | orchestrator | Monday 16 February 2026 02:53:12 +0000 (0:00:00.246) 0:05:10.103 *******
2026-02-16 02:53:17.209700 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:53:17.209710 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:53:17.209721 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:53:17.209732 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:53:17.209742 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:53:17.209753 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:53:17.209764 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:53:17.209774 | orchestrator |
2026-02-16 02:53:17.209793 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-02-16 02:53:17.209812 | orchestrator | Monday 16 February 2026 02:53:12 +0000 (0:00:00.305) 0:05:10.409 *******
2026-02-16 02:53:17.209880 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 02:53:17.209904 | orchestrator |
2026-02-16 02:53:17.209931 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-02-16 02:53:17.209952 | orchestrator | Monday 16 February 2026 02:53:13 +0000 (0:00:00.400) 0:05:10.809 *******
2026-02-16 02:53:17.209964 | orchestrator | ok: [testbed-manager]
2026-02-16 02:53:17.209974 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:53:17.209985 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:53:17.209996 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:53:17.210007 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:53:17.210084 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:53:17.210096 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:53:17.210107 | orchestrator |
2026-02-16 02:53:17.210118 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-02-16 02:53:17.210129 | orchestrator | Monday 16 February 2026 02:53:14 +0000 (0:00:00.924) 0:05:11.734 *******
2026-02-16 02:53:17.210149 | orchestrator | ok: [testbed-manager]
2026-02-16 02:53:17.210160 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:53:17.210171 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:53:17.210182 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:53:17.210193 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:53:17.210203 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:53:17.210214 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:53:17.210225 | orchestrator |
2026-02-16 02:53:17.210235 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-02-16 02:53:17.210247 | orchestrator | Monday 16 February 2026 02:53:16 +0000 (0:00:02.777) 0:05:14.512 *******
2026-02-16 02:53:17.210258 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-02-16 02:53:17.210270 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-02-16 02:53:17.210280 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-02-16 02:53:17.210291 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:53:17.210302 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-02-16 02:53:17.210313 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-02-16 02:53:17.210324 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-02-16 02:53:17.210335 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:53:17.210345 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-02-16 02:53:17.210356 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-02-16 02:53:17.210367 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-02-16 02:53:17.210377 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:53:17.210388 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-02-16 02:53:17.210399 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-02-16 02:53:17.210410 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-02-16 02:53:17.210421 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:53:17.210442 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-02-16 02:54:15.240138 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-02-16 02:54:15.240251 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-02-16 02:54:15.240266 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:54:15.240279 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-02-16 02:54:15.240290 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-02-16 02:54:15.240301 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-02-16 02:54:15.240312 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:54:15.240323 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-02-16 02:54:15.240335 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-02-16 02:54:15.240345 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-02-16 02:54:15.240356 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:54:15.240367 | orchestrator |
2026-02-16 02:54:15.240380 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-02-16 02:54:15.240392 | orchestrator | Monday 16 February 2026 02:53:17 +0000 (0:00:00.578) 0:05:15.091 *******
2026-02-16 02:54:15.240403 | orchestrator | ok: [testbed-manager]
2026-02-16 02:54:15.240414 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:54:15.240425 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:54:15.240436 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:54:15.240449 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:54:15.240472 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:54:15.240533 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:54:15.240553 | orchestrator |
2026-02-16 02:54:15.240571 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-02-16 02:54:15.240589 | orchestrator | Monday 16 February 2026 02:53:23 +0000 (0:00:06.277) 0:05:21.368 *******
2026-02-16 02:54:15.240609 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:54:15.240627 | orchestrator | ok: [testbed-manager]
2026-02-16 02:54:15.240646 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:54:15.240665 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:54:15.240683 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:54:15.240700 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:54:15.240719 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:54:15.240738 | orchestrator |
2026-02-16 02:54:15.240757 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-02-16 02:54:15.240776 | orchestrator | Monday 16 February 2026 02:53:24 +0000 (0:00:01.019) 0:05:22.388 *******
2026-02-16 02:54:15.240794 | orchestrator | ok: [testbed-manager]
2026-02-16 02:54:15.240813 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:54:15.240831 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:54:15.240849 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:54:15.240868 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:54:15.240887 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:54:15.240906 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:54:15.240926 | orchestrator |
2026-02-16 02:54:15.240944 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-02-16 02:54:15.240990 | orchestrator | Monday 16 February 2026 02:53:32 +0000 (0:00:07.950) 0:05:30.339 *******
2026-02-16 02:54:15.241009 | orchestrator | changed: [testbed-manager]
2026-02-16 02:54:15.241028 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:54:15.241047 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:54:15.241067 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:54:15.241085 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:54:15.241104 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:54:15.241122 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:54:15.241141 | orchestrator |
2026-02-16 02:54:15.241156 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-02-16 02:54:15.241167 | orchestrator | Monday 16 February 2026 02:53:35 +0000 (0:00:03.283) 0:05:33.622 *******
2026-02-16 02:54:15.241178 | orchestrator | ok: [testbed-manager]
2026-02-16 02:54:15.241189 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:54:15.241200 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:54:15.241211 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:54:15.241221 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:54:15.241232 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:54:15.241243 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:54:15.241253 | orchestrator |
2026-02-16 02:54:15.241264 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-02-16 02:54:15.241275 | orchestrator | Monday 16 February 2026 02:53:37 +0000 (0:00:01.267) 0:05:34.890 *******
2026-02-16 02:54:15.241286 | orchestrator | ok: [testbed-manager]
2026-02-16 02:54:15.241297 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:54:15.241307 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:54:15.241318 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:54:15.241329 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:54:15.241339 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:54:15.241350 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:54:15.241361 | orchestrator |
2026-02-16 02:54:15.241372 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-02-16 02:54:15.241383 | orchestrator | Monday 16 February 2026 02:53:38 +0000 (0:00:00.602) 0:05:36.377 *******
2026-02-16 02:54:15.241394 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:54:15.241404 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:54:15.241415 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:54:15.241426 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:54:15.241447 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:54:15.241457 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:54:15.241468 | orchestrator | changed: [testbed-manager]
2026-02-16 02:54:15.241479 | orchestrator |
2026-02-16 02:54:15.241490 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-02-16 02:54:15.241501 | orchestrator | Monday 16 February 2026 02:53:39 +0000 (0:00:00.602) 0:05:36.979 *******
2026-02-16 02:54:15.241511 | orchestrator | ok: [testbed-manager]
2026-02-16 02:54:15.241522 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:54:15.241533 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:54:15.241543 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:54:15.241554 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:54:15.241564 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:54:15.241575 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:54:15.241586 | orchestrator |
2026-02-16 02:54:15.241597 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-02-16 02:54:15.241627 | orchestrator | Monday 16 February 2026 02:53:48 +0000 (0:00:09.271) 0:05:46.251 *******
2026-02-16 02:54:15.241639 | orchestrator | changed: [testbed-manager]
2026-02-16 02:54:15.241650 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:54:15.241661 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:54:15.241671 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:54:15.241682 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:54:15.241693 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:54:15.241703 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:54:15.241714 | orchestrator |
2026-02-16 02:54:15.241725 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-16 02:54:15.241736 | orchestrator | Monday 16 February 2026 02:53:49 +0000 (0:00:00.865) 0:05:47.116 *******
2026-02-16 02:54:15.241751 | orchestrator | ok: [testbed-manager]
2026-02-16 02:54:15.241770 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:54:15.241788 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:54:15.241806 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:54:15.241823 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:54:15.241840 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:54:15.241859 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:54:15.241877 | orchestrator |
2026-02-16 02:54:15.241897 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-16 02:54:15.241916 | orchestrator | Monday 16 February 2026 02:53:58 +0000 (0:00:08.732) 0:05:55.849 *******
2026-02-16 02:54:15.241934 | orchestrator | ok: [testbed-manager]
2026-02-16 02:54:15.241978 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:54:15.241990 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:54:15.242001 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:54:15.242011 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:54:15.242084 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:54:15.242096 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:54:15.242106 | orchestrator |
2026-02-16 02:54:15.242118 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-16 02:54:15.242128 | orchestrator | Monday 16 February 2026 02:54:08 +0000 (0:00:10.791) 0:06:06.640 *******
2026-02-16 02:54:15.242139 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-16 02:54:15.242151 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-16 02:54:15.242161 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-16 02:54:15.242172 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-16 02:54:15.242183 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-16 02:54:15.242194 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-16 02:54:15.242205 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-16 02:54:15.242216 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-16 02:54:15.242226 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-16 02:54:15.242248 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-16 02:54:15.242259 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-16 02:54:15.242319 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-16 02:54:15.242332 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-16 02:54:15.242343 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-16 02:54:15.242353 | orchestrator |
2026-02-16 02:54:15.242364 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-16 02:54:15.242375 | orchestrator | Monday 16 February 2026 02:54:10 +0000 (0:00:01.216) 0:06:07.857 *******
2026-02-16 02:54:15.242390 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:54:15.242402 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:54:15.242413 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:54:15.242423 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:54:15.242434 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:54:15.242445 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:54:15.242455 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:54:15.242466 | orchestrator |
2026-02-16 02:54:15.242477 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-16 02:54:15.242488 | orchestrator | Monday 16 February 2026 02:54:10 +0000 (0:00:00.502) 0:06:08.360 *******
2026-02-16 02:54:15.242499 | orchestrator | ok: [testbed-manager]
2026-02-16 02:54:15.242510 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:54:15.242520 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:54:15.242531 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:54:15.242542 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:54:15.242553 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:54:15.242563 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:54:15.242574 | orchestrator |
2026-02-16 02:54:15.242585 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-16 02:54:15.242597 | orchestrator | Monday 16 February 2026 02:54:14 +0000 (0:00:03.635) 0:06:11.995 *******
2026-02-16 02:54:15.242608 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:54:15.242619 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:54:15.242629 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:54:15.242640 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:54:15.242650 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:54:15.242661 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:54:15.242672 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:54:15.242682 | orchestrator |
2026-02-16 02:54:15.242694 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-16 02:54:15.242705 | orchestrator | Monday 16 February 2026 02:54:14 +0000 (0:00:00.462) 0:06:12.457 *******
2026-02-16 02:54:15.242716 | orchestrator | skipping: [testbed-manager] =>
(item=python3-docker)  2026-02-16 02:54:15.242727 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-02-16 02:54:15.242738 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:54:15.242748 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-02-16 02:54:15.242759 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-02-16 02:54:15.242770 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:54:15.242780 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-02-16 02:54:15.242791 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-02-16 02:54:15.242802 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:54:15.242825 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-02-16 02:54:33.529068 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-02-16 02:54:33.529173 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:54:33.529189 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-02-16 02:54:33.529201 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-02-16 02:54:33.529212 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:54:33.529251 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-02-16 02:54:33.529263 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-02-16 02:54:33.529274 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:54:33.529284 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-02-16 02:54:33.529295 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-02-16 02:54:33.529305 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:54:33.529316 | orchestrator | 2026-02-16 02:54:33.529329 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-02-16 02:54:33.529341 | 
orchestrator | Monday 16 February 2026 02:54:15 +0000 (0:00:00.695) 0:06:13.153 ******* 2026-02-16 02:54:33.529353 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:54:33.529363 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:54:33.529373 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:54:33.529384 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:54:33.529394 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:54:33.529405 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:54:33.529415 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:54:33.529426 | orchestrator | 2026-02-16 02:54:33.529442 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-02-16 02:54:33.529460 | orchestrator | Monday 16 February 2026 02:54:15 +0000 (0:00:00.474) 0:06:13.628 ******* 2026-02-16 02:54:33.529479 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:54:33.529496 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:54:33.529514 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:54:33.529532 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:54:33.529549 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:54:33.529565 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:54:33.529583 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:54:33.529601 | orchestrator | 2026-02-16 02:54:33.529620 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-02-16 02:54:33.529639 | orchestrator | Monday 16 February 2026 02:54:16 +0000 (0:00:00.448) 0:06:14.076 ******* 2026-02-16 02:54:33.529658 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:54:33.529677 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:54:33.529694 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:54:33.529714 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:54:33.529725 | orchestrator | 
skipping: [testbed-node-0] 2026-02-16 02:54:33.529735 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:54:33.529745 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:54:33.529756 | orchestrator | 2026-02-16 02:54:33.529767 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-02-16 02:54:33.529778 | orchestrator | Monday 16 February 2026 02:54:16 +0000 (0:00:00.485) 0:06:14.562 ******* 2026-02-16 02:54:33.529789 | orchestrator | ok: [testbed-manager] 2026-02-16 02:54:33.529799 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:54:33.529810 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:54:33.529820 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:54:33.529831 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:54:33.529842 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:54:33.529852 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:54:33.529862 | orchestrator | 2026-02-16 02:54:33.529873 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-02-16 02:54:33.529884 | orchestrator | Monday 16 February 2026 02:54:18 +0000 (0:00:01.849) 0:06:16.412 ******* 2026-02-16 02:54:33.529896 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 02:54:33.529909 | orchestrator | 2026-02-16 02:54:33.529920 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-02-16 02:54:33.529931 | orchestrator | Monday 16 February 2026 02:54:19 +0000 (0:00:00.811) 0:06:17.224 ******* 2026-02-16 02:54:33.529964 | orchestrator | ok: [testbed-manager] 2026-02-16 02:54:33.529975 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:54:33.530012 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:54:33.530089 | orchestrator | 
changed: [testbed-node-5] 2026-02-16 02:54:33.530101 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:54:33.530112 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:54:33.530122 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:54:33.530133 | orchestrator | 2026-02-16 02:54:33.530144 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-02-16 02:54:33.530154 | orchestrator | Monday 16 February 2026 02:54:20 +0000 (0:00:00.798) 0:06:18.022 ******* 2026-02-16 02:54:33.530164 | orchestrator | ok: [testbed-manager] 2026-02-16 02:54:33.530175 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:54:33.530186 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:54:33.530196 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:54:33.530206 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:54:33.530217 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:54:33.530227 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:54:33.530238 | orchestrator | 2026-02-16 02:54:33.530248 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-02-16 02:54:33.530259 | orchestrator | Monday 16 February 2026 02:54:21 +0000 (0:00:00.807) 0:06:18.829 ******* 2026-02-16 02:54:33.530269 | orchestrator | ok: [testbed-manager] 2026-02-16 02:54:33.530280 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:54:33.530290 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:54:33.530301 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:54:33.530311 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:54:33.530321 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:54:33.530332 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:54:33.530342 | orchestrator | 2026-02-16 02:54:33.530353 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-02-16 02:54:33.530384 | 
orchestrator | Monday 16 February 2026 02:54:22 +0000 (0:00:01.472) 0:06:20.302 ******* 2026-02-16 02:54:33.530396 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:54:33.530407 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:54:33.530418 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:54:33.530428 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:54:33.530439 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:54:33.530450 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:54:33.530460 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:54:33.530471 | orchestrator | 2026-02-16 02:54:33.530482 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-02-16 02:54:33.530493 | orchestrator | Monday 16 February 2026 02:54:23 +0000 (0:00:01.341) 0:06:21.644 ******* 2026-02-16 02:54:33.530504 | orchestrator | ok: [testbed-manager] 2026-02-16 02:54:33.530514 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:54:33.530525 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:54:33.530536 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:54:33.530546 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:54:33.530557 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:54:33.530568 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:54:33.530578 | orchestrator | 2026-02-16 02:54:33.530589 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-16 02:54:33.530600 | orchestrator | Monday 16 February 2026 02:54:25 +0000 (0:00:01.312) 0:06:22.957 ******* 2026-02-16 02:54:33.530611 | orchestrator | changed: [testbed-manager] 2026-02-16 02:54:33.530621 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:54:33.530632 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:54:33.530647 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:54:33.530667 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:54:33.530685 | 
orchestrator | changed: [testbed-node-1] 2026-02-16 02:54:33.530703 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:54:33.530721 | orchestrator | 2026-02-16 02:54:33.530752 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-16 02:54:33.530770 | orchestrator | Monday 16 February 2026 02:54:26 +0000 (0:00:01.401) 0:06:24.358 ******* 2026-02-16 02:54:33.530787 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 02:54:33.530807 | orchestrator | 2026-02-16 02:54:33.530825 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-16 02:54:33.530844 | orchestrator | Monday 16 February 2026 02:54:27 +0000 (0:00:00.936) 0:06:25.294 ******* 2026-02-16 02:54:33.530864 | orchestrator | ok: [testbed-manager] 2026-02-16 02:54:33.530881 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:54:33.530900 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:54:33.530917 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:54:33.530935 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:54:33.530954 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:54:33.530971 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:54:33.531009 | orchestrator | 2026-02-16 02:54:33.531029 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-16 02:54:33.531047 | orchestrator | Monday 16 February 2026 02:54:28 +0000 (0:00:01.344) 0:06:26.638 ******* 2026-02-16 02:54:33.531065 | orchestrator | ok: [testbed-manager] 2026-02-16 02:54:33.531084 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:54:33.531103 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:54:33.531121 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:54:33.531135 | orchestrator | 
ok: [testbed-node-0] 2026-02-16 02:54:33.531160 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:54:33.531171 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:54:33.531181 | orchestrator | 2026-02-16 02:54:33.531192 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-16 02:54:33.531203 | orchestrator | Monday 16 February 2026 02:54:30 +0000 (0:00:01.080) 0:06:27.719 ******* 2026-02-16 02:54:33.531214 | orchestrator | ok: [testbed-manager] 2026-02-16 02:54:33.531225 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:54:33.531235 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:54:33.531246 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:54:33.531256 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:54:33.531267 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:54:33.531277 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:54:33.531288 | orchestrator | 2026-02-16 02:54:33.531299 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-16 02:54:33.531310 | orchestrator | Monday 16 February 2026 02:54:31 +0000 (0:00:01.060) 0:06:28.779 ******* 2026-02-16 02:54:33.531320 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:54:33.531331 | orchestrator | ok: [testbed-manager] 2026-02-16 02:54:33.531341 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:54:33.531352 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:54:33.531362 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:54:33.531373 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:54:33.531383 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:54:33.531394 | orchestrator | 2026-02-16 02:54:33.531405 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-16 02:54:33.531415 | orchestrator | Monday 16 February 2026 02:54:32 +0000 (0:00:01.260) 0:06:30.040 ******* 2026-02-16 02:54:33.531442 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 02:54:33.531464 | orchestrator | 2026-02-16 02:54:33.531475 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-16 02:54:33.531486 | orchestrator | Monday 16 February 2026 02:54:33 +0000 (0:00:00.833) 0:06:30.873 ******* 2026-02-16 02:54:33.531497 | orchestrator | 2026-02-16 02:54:33.531507 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-16 02:54:33.531527 | orchestrator | Monday 16 February 2026 02:54:33 +0000 (0:00:00.040) 0:06:30.913 ******* 2026-02-16 02:54:33.531538 | orchestrator | 2026-02-16 02:54:33.531549 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-16 02:54:33.531559 | orchestrator | Monday 16 February 2026 02:54:33 +0000 (0:00:00.048) 0:06:30.962 ******* 2026-02-16 02:54:33.531570 | orchestrator | 2026-02-16 02:54:33.531581 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-16 02:54:33.531604 | orchestrator | Monday 16 February 2026 02:54:33 +0000 (0:00:00.039) 0:06:31.001 ******* 2026-02-16 02:54:59.134240 | orchestrator | 2026-02-16 02:54:59.134346 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-16 02:54:59.134364 | orchestrator | Monday 16 February 2026 02:54:33 +0000 (0:00:00.058) 0:06:31.060 ******* 2026-02-16 02:54:59.134376 | orchestrator | 2026-02-16 02:54:59.134388 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-16 02:54:59.134399 | orchestrator | Monday 16 February 2026 02:54:33 +0000 (0:00:00.046) 0:06:31.106 ******* 2026-02-16 02:54:59.134411 | orchestrator | 2026-02-16 
02:54:59.134422 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-16 02:54:59.134433 | orchestrator | Monday 16 February 2026 02:54:33 +0000 (0:00:00.038) 0:06:31.145 ******* 2026-02-16 02:54:59.134444 | orchestrator | 2026-02-16 02:54:59.134455 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-16 02:54:59.134466 | orchestrator | Monday 16 February 2026 02:54:33 +0000 (0:00:00.038) 0:06:31.183 ******* 2026-02-16 02:54:59.134478 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:54:59.134490 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:54:59.134501 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:54:59.134512 | orchestrator | 2026-02-16 02:54:59.134523 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-16 02:54:59.134534 | orchestrator | Monday 16 February 2026 02:54:34 +0000 (0:00:01.195) 0:06:32.379 ******* 2026-02-16 02:54:59.134545 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:54:59.134557 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:54:59.134568 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:54:59.134579 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:54:59.134590 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:54:59.134601 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:54:59.134612 | orchestrator | changed: [testbed-manager] 2026-02-16 02:54:59.134622 | orchestrator | 2026-02-16 02:54:59.134633 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-16 02:54:59.134644 | orchestrator | Monday 16 February 2026 02:54:36 +0000 (0:00:02.001) 0:06:34.380 ******* 2026-02-16 02:54:59.134655 | orchestrator | changed: [testbed-manager] 2026-02-16 02:54:59.134666 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:54:59.134677 | orchestrator | changed: [testbed-node-4] 2026-02-16 
02:54:59.134688 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:54:59.134699 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:54:59.134709 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:54:59.134720 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:54:59.134731 | orchestrator | 2026-02-16 02:54:59.134742 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-16 02:54:59.134753 | orchestrator | Monday 16 February 2026 02:54:37 +0000 (0:00:01.162) 0:06:35.543 ******* 2026-02-16 02:54:59.134764 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:54:59.134776 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:54:59.134789 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:54:59.134801 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:54:59.134814 | orchestrator | changed: [testbed-node-0] 2026-02-16 02:54:59.134825 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:54:59.134838 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:54:59.134851 | orchestrator | 2026-02-16 02:54:59.134863 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-16 02:54:59.134876 | orchestrator | Monday 16 February 2026 02:54:40 +0000 (0:00:02.270) 0:06:37.813 ******* 2026-02-16 02:54:59.134928 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:54:59.134942 | orchestrator | 2026-02-16 02:54:59.134955 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-16 02:54:59.134967 | orchestrator | Monday 16 February 2026 02:54:40 +0000 (0:00:00.112) 0:06:37.925 ******* 2026-02-16 02:54:59.134979 | orchestrator | ok: [testbed-manager] 2026-02-16 02:54:59.134992 | orchestrator | changed: [testbed-node-4] 2026-02-16 02:54:59.135005 | orchestrator | changed: [testbed-node-3] 2026-02-16 02:54:59.135018 | orchestrator | changed: [testbed-node-5] 2026-02-16 02:54:59.135030 | 
orchestrator | changed: [testbed-node-0] 2026-02-16 02:54:59.135078 | orchestrator | changed: [testbed-node-1] 2026-02-16 02:54:59.135091 | orchestrator | changed: [testbed-node-2] 2026-02-16 02:54:59.135103 | orchestrator | 2026-02-16 02:54:59.135115 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-16 02:54:59.135129 | orchestrator | Monday 16 February 2026 02:54:41 +0000 (0:00:00.955) 0:06:38.881 ******* 2026-02-16 02:54:59.135139 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:54:59.135150 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:54:59.135161 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:54:59.135172 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:54:59.135183 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:54:59.135193 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:54:59.135204 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:54:59.135215 | orchestrator | 2026-02-16 02:54:59.135226 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-16 02:54:59.135238 | orchestrator | Monday 16 February 2026 02:54:41 +0000 (0:00:00.480) 0:06:39.362 ******* 2026-02-16 02:54:59.135258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 02:54:59.135280 | orchestrator | 2026-02-16 02:54:59.135300 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-16 02:54:59.135319 | orchestrator | Monday 16 February 2026 02:54:42 +0000 (0:00:00.977) 0:06:40.339 ******* 2026-02-16 02:54:59.135338 | orchestrator | ok: [testbed-manager] 2026-02-16 02:54:59.135357 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:54:59.135376 | orchestrator | ok: 
[testbed-node-4] 2026-02-16 02:54:59.135395 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:54:59.135413 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:54:59.135433 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:54:59.135453 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:54:59.135472 | orchestrator | 2026-02-16 02:54:59.135489 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-16 02:54:59.135500 | orchestrator | Monday 16 February 2026 02:54:43 +0000 (0:00:00.896) 0:06:41.236 ******* 2026-02-16 02:54:59.135511 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-16 02:54:59.135541 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-16 02:54:59.135553 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-16 02:54:59.135564 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-16 02:54:59.135575 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-16 02:54:59.135585 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-16 02:54:59.135596 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-16 02:54:59.135607 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-16 02:54:59.135618 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-02-16 02:54:59.135629 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-16 02:54:59.135640 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-16 02:54:59.135650 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-16 02:54:59.135673 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-16 02:54:59.135684 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-16 02:54:59.135695 | orchestrator | 2026-02-16 02:54:59.135706 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-02-16 02:54:59.135716 | orchestrator | Monday 16 February 2026 02:54:45 +0000 (0:00:02.313) 0:06:43.549 ******* 2026-02-16 02:54:59.135727 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:54:59.135738 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:54:59.135749 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:54:59.135760 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:54:59.135770 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:54:59.135781 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:54:59.135791 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:54:59.135802 | orchestrator | 2026-02-16 02:54:59.135813 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-16 02:54:59.135824 | orchestrator | Monday 16 February 2026 02:54:46 +0000 (0:00:00.665) 0:06:44.215 ******* 2026-02-16 02:54:59.135836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 02:54:59.135849 | orchestrator | 2026-02-16 02:54:59.135860 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-02-16 02:54:59.135871 | orchestrator | Monday 16 February 2026 02:54:47 +0000 (0:00:00.804) 0:06:45.019 ******* 2026-02-16 02:54:59.135882 | orchestrator | ok: [testbed-manager] 2026-02-16 02:54:59.135892 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:54:59.135903 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:54:59.135914 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:54:59.135925 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:54:59.135935 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:54:59.135946 | orchestrator | ok: 
[testbed-node-2] 2026-02-16 02:54:59.135957 | orchestrator | 2026-02-16 02:54:59.135967 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-16 02:54:59.135978 | orchestrator | Monday 16 February 2026 02:54:48 +0000 (0:00:00.853) 0:06:45.873 ******* 2026-02-16 02:54:59.135996 | orchestrator | ok: [testbed-manager] 2026-02-16 02:54:59.136007 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:54:59.136018 | orchestrator | ok: [testbed-node-4] 2026-02-16 02:54:59.136029 | orchestrator | ok: [testbed-node-5] 2026-02-16 02:54:59.136064 | orchestrator | ok: [testbed-node-0] 2026-02-16 02:54:59.136083 | orchestrator | ok: [testbed-node-1] 2026-02-16 02:54:59.136094 | orchestrator | ok: [testbed-node-2] 2026-02-16 02:54:59.136104 | orchestrator | 2026-02-16 02:54:59.136116 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-16 02:54:59.136127 | orchestrator | Monday 16 February 2026 02:54:49 +0000 (0:00:01.002) 0:06:46.875 ******* 2026-02-16 02:54:59.136138 | orchestrator | skipping: [testbed-manager] 2026-02-16 02:54:59.136149 | orchestrator | skipping: [testbed-node-3] 2026-02-16 02:54:59.136160 | orchestrator | skipping: [testbed-node-4] 2026-02-16 02:54:59.136170 | orchestrator | skipping: [testbed-node-5] 2026-02-16 02:54:59.136181 | orchestrator | skipping: [testbed-node-0] 2026-02-16 02:54:59.136192 | orchestrator | skipping: [testbed-node-1] 2026-02-16 02:54:59.136203 | orchestrator | skipping: [testbed-node-2] 2026-02-16 02:54:59.136214 | orchestrator | 2026-02-16 02:54:59.136224 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-16 02:54:59.136235 | orchestrator | Monday 16 February 2026 02:54:49 +0000 (0:00:00.490) 0:06:47.366 ******* 2026-02-16 02:54:59.136246 | orchestrator | ok: [testbed-manager] 2026-02-16 02:54:59.136257 | orchestrator | ok: [testbed-node-3] 2026-02-16 02:54:59.136268 | 
orchestrator | ok: [testbed-node-4]
2026-02-16 02:54:59.136279 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:54:59.136290 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:54:59.136308 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:54:59.136319 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:54:59.136330 | orchestrator |
2026-02-16 02:54:59.136341 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-02-16 02:54:59.136382 | orchestrator | Monday 16 February 2026 02:54:51 +0000 (0:00:01.417) 0:06:48.784 *******
2026-02-16 02:54:59.136411 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:54:59.136428 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:54:59.136446 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:54:59.136463 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:54:59.136480 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:54:59.136498 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:54:59.136517 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:54:59.136534 | orchestrator |
2026-02-16 02:54:59.136551 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-02-16 02:54:59.136569 | orchestrator | Monday 16 February 2026 02:54:51 +0000 (0:00:00.528) 0:06:49.313 *******
2026-02-16 02:54:59.136587 | orchestrator | ok: [testbed-manager]
2026-02-16 02:54:59.136604 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:54:59.136622 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:54:59.136642 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:54:59.136660 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:54:59.136676 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:54:59.136698 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:55:30.179096 | orchestrator |
2026-02-16 02:55:30.179270 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-02-16 02:55:30.179288 | orchestrator | Monday 16 February 2026 02:54:59 +0000 (0:00:07.481) 0:06:56.795 *******
2026-02-16 02:55:30.179300 | orchestrator | ok: [testbed-manager]
2026-02-16 02:55:30.179313 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:55:30.179325 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:55:30.179336 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:55:30.179347 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:55:30.179358 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:55:30.179369 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:55:30.179380 | orchestrator |
2026-02-16 02:55:30.179392 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-02-16 02:55:30.179404 | orchestrator | Monday 16 February 2026 02:55:00 +0000 (0:00:01.439) 0:06:58.235 *******
2026-02-16 02:55:30.179415 | orchestrator | ok: [testbed-manager]
2026-02-16 02:55:30.179426 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:55:30.179437 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:55:30.179448 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:55:30.179459 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:55:30.179469 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:55:30.179480 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:55:30.179491 | orchestrator |
2026-02-16 02:55:30.179503 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-02-16 02:55:30.179514 | orchestrator | Monday 16 February 2026 02:55:02 +0000 (0:00:01.694) 0:06:59.929 *******
2026-02-16 02:55:30.179525 | orchestrator | ok: [testbed-manager]
2026-02-16 02:55:30.179536 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:55:30.179547 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:55:30.179558 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:55:30.179569 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:55:30.179580 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:55:30.179591 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:55:30.179603 | orchestrator |
2026-02-16 02:55:30.179620 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-16 02:55:30.179639 | orchestrator | Monday 16 February 2026 02:55:03 +0000 (0:00:01.592) 0:07:01.522 *******
2026-02-16 02:55:30.179659 | orchestrator | ok: [testbed-manager]
2026-02-16 02:55:30.179678 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:55:30.179696 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:55:30.179745 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:55:30.179762 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:55:30.179779 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:55:30.179797 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:55:30.179816 | orchestrator |
2026-02-16 02:55:30.179834 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-16 02:55:30.179851 | orchestrator | Monday 16 February 2026 02:55:04 +0000 (0:00:00.801) 0:07:02.323 *******
2026-02-16 02:55:30.179867 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:55:30.179887 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:55:30.179906 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:55:30.179923 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:55:30.179940 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:55:30.179951 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:55:30.179961 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:55:30.179972 | orchestrator |
2026-02-16 02:55:30.179983 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-02-16 02:55:30.179995 | orchestrator | Monday 16 February 2026 02:55:05 +0000 (0:00:00.510) 0:07:03.273 *******
2026-02-16 02:55:30.180005 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:55:30.180016 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:55:30.180027 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:55:30.180038 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:55:30.180048 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:55:30.180059 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:55:30.180070 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:55:30.180081 | orchestrator |
2026-02-16 02:55:30.180092 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-02-16 02:55:30.180130 | orchestrator | Monday 16 February 2026 02:55:06 +0000 (0:00:00.510) 0:07:03.784 *******
2026-02-16 02:55:30.180150 | orchestrator | ok: [testbed-manager]
2026-02-16 02:55:30.180190 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:55:30.180208 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:55:30.180225 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:55:30.180240 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:55:30.180259 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:55:30.180278 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:55:30.180297 | orchestrator |
2026-02-16 02:55:30.180315 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-02-16 02:55:30.180335 | orchestrator | Monday 16 February 2026 02:55:06 +0000 (0:00:00.458) 0:07:04.242 *******
2026-02-16 02:55:30.180352 | orchestrator | ok: [testbed-manager]
2026-02-16 02:55:30.180371 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:55:30.180389 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:55:30.180408 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:55:30.180426 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:55:30.180445 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:55:30.180464 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:55:30.180482 | orchestrator |
2026-02-16 02:55:30.180500 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-02-16 02:55:30.180515 | orchestrator | Monday 16 February 2026 02:55:07 +0000 (0:00:00.512) 0:07:04.754 *******
2026-02-16 02:55:30.180526 | orchestrator | ok: [testbed-manager]
2026-02-16 02:55:30.180537 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:55:30.180547 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:55:30.180558 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:55:30.180568 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:55:30.180579 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:55:30.180590 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:55:30.180601 | orchestrator |
2026-02-16 02:55:30.180612 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-02-16 02:55:30.180623 | orchestrator | Monday 16 February 2026 02:55:07 +0000 (0:00:00.648) 0:07:05.403 *******
2026-02-16 02:55:30.180633 | orchestrator | ok: [testbed-manager]
2026-02-16 02:55:30.180644 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:55:30.180667 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:55:30.180678 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:55:30.180688 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:55:30.180699 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:55:30.180710 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:55:30.180720 | orchestrator |
2026-02-16 02:55:30.180753 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-02-16 02:55:30.180765 | orchestrator | Monday 16 February 2026 02:55:13 +0000 (0:00:05.546) 0:07:10.949 *******
2026-02-16 02:55:30.180776 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:55:30.180787 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:55:30.180798 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:55:30.180809 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:55:30.180819 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:55:30.180830 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:55:30.180841 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:55:30.180852 | orchestrator |
2026-02-16 02:55:30.180863 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-02-16 02:55:30.180874 | orchestrator | Monday 16 February 2026 02:55:13 +0000 (0:00:00.495) 0:07:11.445 *******
2026-02-16 02:55:30.180886 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 02:55:30.180899 | orchestrator |
2026-02-16 02:55:30.180910 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-02-16 02:55:30.180921 | orchestrator | Monday 16 February 2026 02:55:14 +0000 (0:00:00.945) 0:07:12.391 *******
2026-02-16 02:55:30.180932 | orchestrator | ok: [testbed-manager]
2026-02-16 02:55:30.180943 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:55:30.180953 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:55:30.180964 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:55:30.180975 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:55:30.180986 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:55:30.180996 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:55:30.181007 | orchestrator |
2026-02-16 02:55:30.181018 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-02-16 02:55:30.181029 | orchestrator | Monday 16 February 2026 02:55:16 +0000 (0:00:01.796) 0:07:14.188 *******
2026-02-16 02:55:30.181039 | orchestrator | ok: [testbed-manager]
2026-02-16 02:55:30.181050 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:55:30.181061 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:55:30.181072 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:55:30.181082 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:55:30.181093 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:55:30.181137 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:55:30.181149 | orchestrator |
2026-02-16 02:55:30.181160 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-02-16 02:55:30.181171 | orchestrator | Monday 16 February 2026 02:55:17 +0000 (0:00:01.078) 0:07:15.267 *******
2026-02-16 02:55:30.181181 | orchestrator | ok: [testbed-manager]
2026-02-16 02:55:30.181192 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:55:30.181203 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:55:30.181213 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:55:30.181224 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:55:30.181235 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:55:30.181246 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:55:30.181257 | orchestrator |
2026-02-16 02:55:30.181273 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-02-16 02:55:30.181293 | orchestrator | Monday 16 February 2026 02:55:18 +0000 (0:00:00.804) 0:07:16.071 *******
2026-02-16 02:55:30.181320 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-16 02:55:30.181339 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-16 02:55:30.181368 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-16 02:55:30.181390 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-16 02:55:30.181411 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-16 02:55:30.181430 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-16 02:55:30.181444 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-16 02:55:30.181454 | orchestrator |
2026-02-16 02:55:30.181466 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-02-16 02:55:30.181476 | orchestrator | Monday 16 February 2026 02:55:20 +0000 (0:00:01.852) 0:07:17.923 *******
2026-02-16 02:55:30.181487 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 02:55:30.181499 | orchestrator |
2026-02-16 02:55:30.181510 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-02-16 02:55:30.181521 | orchestrator | Monday 16 February 2026 02:55:21 +0000 (0:00:00.811) 0:07:18.734 *******
2026-02-16 02:55:30.181532 | orchestrator | changed: [testbed-manager]
2026-02-16 02:55:30.181543 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:55:30.181554 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:55:30.181565 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:55:30.181576 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:55:30.181586 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:55:30.181597 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:55:30.181608 | orchestrator |
2026-02-16 02:55:30.181628 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-02-16 02:56:00.526478 | orchestrator | Monday 16 February 2026 02:55:30 +0000 (0:00:09.106) 0:07:27.840 *******
2026-02-16 02:56:00.526626 | orchestrator | ok: [testbed-manager]
2026-02-16 02:56:00.526644 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:56:00.526656 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:56:00.526667 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:56:00.526677 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:56:00.526688 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:56:00.526699 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:56:00.526710 | orchestrator |
2026-02-16 02:56:00.526722 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-02-16 02:56:00.526734 | orchestrator | Monday 16 February 2026 02:55:32 +0000 (0:00:01.873) 0:07:29.714 *******
2026-02-16 02:56:00.526745 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:56:00.526756 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:56:00.526766 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:56:00.526777 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:56:00.526788 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:56:00.526798 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:56:00.526809 | orchestrator |
2026-02-16 02:56:00.526820 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-02-16 02:56:00.526831 | orchestrator | Monday 16 February 2026 02:55:33 +0000 (0:00:01.209) 0:07:30.923 *******
2026-02-16 02:56:00.526842 | orchestrator | changed: [testbed-manager]
2026-02-16 02:56:00.526854 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:56:00.526865 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:56:00.526875 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:56:00.526886 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:56:00.526921 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:56:00.526933 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:56:00.526944 | orchestrator |
2026-02-16 02:56:00.526955 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-02-16 02:56:00.526965 | orchestrator |
2026-02-16 02:56:00.526976 | orchestrator | TASK [Include hardening role] **************************************************
2026-02-16 02:56:00.526987 | orchestrator | Monday 16 February 2026 02:55:34 +0000 (0:00:01.298) 0:07:32.222 *******
2026-02-16 02:56:00.526998 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:56:00.527008 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:56:00.527019 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:56:00.527029 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:56:00.527040 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:56:00.527050 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:56:00.527061 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:56:00.527072 | orchestrator |
2026-02-16 02:56:00.527082 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-02-16 02:56:00.527093 | orchestrator |
2026-02-16 02:56:00.527104 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-02-16 02:56:00.527115 | orchestrator | Monday 16 February 2026 02:55:35 +0000 (0:00:00.663) 0:07:32.885 *******
2026-02-16 02:56:00.527125 | orchestrator | changed: [testbed-manager]
2026-02-16 02:56:00.527136 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:56:00.527147 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:56:00.527157 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:56:00.527271 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:56:00.527291 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:56:00.527310 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:56:00.527329 | orchestrator |
2026-02-16 02:56:00.527349 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-02-16 02:56:00.527385 | orchestrator | Monday 16 February 2026 02:55:36 +0000 (0:00:01.345) 0:07:34.230 *******
2026-02-16 02:56:00.527405 | orchestrator | ok: [testbed-manager]
2026-02-16 02:56:00.527424 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:56:00.527442 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:56:00.527460 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:56:00.527478 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:56:00.527497 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:56:00.527516 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:56:00.527535 | orchestrator |
2026-02-16 02:56:00.527553 | orchestrator | TASK [Include auditd role] *****************************************************
2026-02-16 02:56:00.527572 | orchestrator | Monday 16 February 2026 02:55:37 +0000 (0:00:01.385) 0:07:35.616 *******
2026-02-16 02:56:00.527591 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:56:00.527608 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:56:00.527628 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:56:00.527645 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:56:00.527663 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:56:00.527681 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:56:00.527701 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:56:00.527720 | orchestrator |
2026-02-16 02:56:00.527738 | orchestrator | TASK [Include smartd role] *****************************************************
2026-02-16 02:56:00.527757 | orchestrator | Monday 16 February 2026 02:55:38 +0000 (0:00:00.476) 0:07:36.093 *******
2026-02-16 02:56:00.527776 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 02:56:00.527795 | orchestrator |
2026-02-16 02:56:00.527812 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-02-16 02:56:00.527832 | orchestrator | Monday 16 February 2026 02:55:39 +0000 (0:00:00.949) 0:07:37.043 *******
2026-02-16 02:56:00.527853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 02:56:00.527890 | orchestrator |
2026-02-16 02:56:00.527910 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-02-16 02:56:00.527922 | orchestrator | Monday 16 February 2026 02:55:40 +0000 (0:00:00.783) 0:07:37.826 *******
2026-02-16 02:56:00.527933 | orchestrator | changed: [testbed-manager]
2026-02-16 02:56:00.527943 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:56:00.527954 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:56:00.527965 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:56:00.527976 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:56:00.527987 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:56:00.527997 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:56:00.528008 | orchestrator |
2026-02-16 02:56:00.528041 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-02-16 02:56:00.528052 | orchestrator | Monday 16 February 2026 02:55:48 +0000 (0:00:08.595) 0:07:46.422 *******
2026-02-16 02:56:00.528063 | orchestrator | changed: [testbed-manager]
2026-02-16 02:56:00.528075 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:56:00.528085 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:56:00.528096 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:56:00.528107 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:56:00.528118 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:56:00.528128 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:56:00.528139 | orchestrator |
2026-02-16 02:56:00.528151 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-02-16 02:56:00.528186 | orchestrator | Monday 16 February 2026 02:55:49 +0000 (0:00:00.832) 0:07:47.254 *******
2026-02-16 02:56:00.528198 | orchestrator | changed: [testbed-manager]
2026-02-16 02:56:00.528209 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:56:00.528220 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:56:00.528231 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:56:00.528242 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:56:00.528253 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:56:00.528263 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:56:00.528274 | orchestrator |
2026-02-16 02:56:00.528285 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-02-16 02:56:00.528296 | orchestrator | Monday 16 February 2026 02:55:50 +0000 (0:00:01.298) 0:07:48.553 *******
2026-02-16 02:56:00.528307 | orchestrator | changed: [testbed-manager]
2026-02-16 02:56:00.528318 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:56:00.528329 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:56:00.528346 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:56:00.528366 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:56:00.528383 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:56:00.528401 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:56:00.528420 | orchestrator |
2026-02-16 02:56:00.528440 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-02-16 02:56:00.528458 | orchestrator | Monday 16 February 2026 02:55:52 +0000 (0:00:01.805) 0:07:50.359 *******
2026-02-16 02:56:00.528477 | orchestrator | changed: [testbed-manager]
2026-02-16 02:56:00.528497 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:56:00.528514 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:56:00.528531 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:56:00.528542 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:56:00.528553 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:56:00.528564 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:56:00.528574 | orchestrator |
2026-02-16 02:56:00.528585 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-02-16 02:56:00.528596 | orchestrator | Monday 16 February 2026 02:55:54 +0000 (0:00:02.092) 0:07:52.451 *******
2026-02-16 02:56:00.528607 | orchestrator | changed: [testbed-manager]
2026-02-16 02:56:00.528618 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:56:00.528639 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:56:00.528650 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:56:00.528660 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:56:00.528671 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:56:00.528681 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:56:00.528692 | orchestrator |
2026-02-16 02:56:00.528703 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-02-16 02:56:00.528713 | orchestrator |
2026-02-16 02:56:00.528742 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-02-16 02:56:00.528753 | orchestrator | Monday 16 February 2026 02:55:55 +0000 (0:00:01.079) 0:07:53.531 *******
2026-02-16 02:56:00.528765 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 02:56:00.528776 | orchestrator |
2026-02-16 02:56:00.528786 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-16 02:56:00.528797 | orchestrator | Monday 16 February 2026 02:55:56 +0000 (0:00:00.820) 0:07:54.351 *******
2026-02-16 02:56:00.528808 | orchestrator | ok: [testbed-manager]
2026-02-16 02:56:00.528819 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:56:00.528829 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:56:00.528840 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:56:00.528850 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:56:00.528861 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:56:00.528872 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:56:00.528882 | orchestrator |
2026-02-16 02:56:00.528893 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-16 02:56:00.528904 | orchestrator | Monday 16 February 2026 02:55:57 +0000 (0:00:01.066) 0:07:55.417 *******
2026-02-16 02:56:00.528915 | orchestrator | changed: [testbed-manager]
2026-02-16 02:56:00.528925 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:56:00.528936 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:56:00.528947 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:56:00.528958 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:56:00.528968 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:56:00.528979 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:56:00.528989 | orchestrator |
2026-02-16 02:56:00.529000 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-02-16 02:56:00.529011 | orchestrator | Monday 16 February 2026 02:55:58 +0000 (0:00:01.077) 0:07:56.494 *******
2026-02-16 02:56:00.529022 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 02:56:00.529033 | orchestrator |
2026-02-16 02:56:00.529044 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-16 02:56:00.529054 | orchestrator | Monday 16 February 2026 02:55:59 +0000 (0:00:00.916) 0:07:57.410 *******
2026-02-16 02:56:00.529065 | orchestrator | ok: [testbed-manager]
2026-02-16 02:56:00.529076 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:56:00.529086 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:56:00.529097 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:56:00.529107 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:56:00.529118 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:56:00.529129 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:56:00.529139 | orchestrator |
2026-02-16 02:56:00.529188 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-16 02:56:01.935629 | orchestrator | Monday 16 February 2026 02:56:00 +0000 (0:00:00.775) 0:07:58.186 *******
2026-02-16 02:56:01.935727 | orchestrator | changed: [testbed-manager]
2026-02-16 02:56:01.935742 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:56:01.935753 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:56:01.935763 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:56:01.935772 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:56:01.935782 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:56:01.935792 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:56:01.935833 | orchestrator |
2026-02-16 02:56:01.935845 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 02:56:01.935857 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-16 02:56:01.935871 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-16 02:56:01.935887 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-16 02:56:01.935909 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-16 02:56:01.935932 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-02-16 02:56:01.935948 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-16 02:56:01.935963 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-16 02:56:01.935981 | orchestrator |
2026-02-16 02:56:01.935995 | orchestrator |
2026-02-16 02:56:01.936011 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 02:56:01.936028 | orchestrator | Monday 16 February 2026 02:56:01 +0000 (0:00:01.023) 0:07:59.209 *******
2026-02-16 02:56:01.936044 | orchestrator | ===============================================================================
2026-02-16 02:56:01.936061 | orchestrator | osism.commons.packages : Install required packages --------------------- 86.08s
2026-02-16 02:56:01.936078 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.23s
2026-02-16 02:56:01.936096 | orchestrator | osism.commons.packages : Download required packages -------------------- 33.60s
2026-02-16 02:56:01.936112 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.49s
2026-02-16 02:56:01.936129 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.09s
2026-02-16 02:56:01.936156 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.79s
2026-02-16 02:56:01.936205 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.63s
2026-02-16 02:56:01.936219 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.27s
2026-02-16 02:56:01.936231 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.11s
2026-02-16 02:56:01.936242 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.73s
2026-02-16 02:56:01.936253 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.60s
2026-02-16 02:56:01.936265 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.48s
2026-02-16 02:56:01.936276 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.95s
2026-02-16 02:56:01.936287 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.64s
2026-02-16 02:56:01.936298 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.48s
2026-02-16 02:56:01.936309 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.23s
2026-02-16 02:56:01.936321 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.28s
2026-02-16 02:56:01.936331 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.75s
2026-02-16 02:56:01.936342 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.70s
2026-02-16 02:56:01.936353 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.55s
2026-02-16 02:56:02.196030 | orchestrator | + osism apply fail2ban
2026-02-16 02:56:14.673777 | orchestrator | 2026-02-16 02:56:14 | INFO  | Task 5c3923ea-327b-435e-bdab-9bcc8194ac39 (fail2ban) was prepared for execution.
2026-02-16 02:56:14.673919 | orchestrator | 2026-02-16 02:56:14 | INFO  | It takes a moment until task 5c3923ea-327b-435e-bdab-9bcc8194ac39 (fail2ban) has been started and output is visible here.
2026-02-16 02:56:35.947913 | orchestrator |
2026-02-16 02:56:35.948023 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-02-16 02:56:35.948040 | orchestrator |
2026-02-16 02:56:35.948052 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-02-16 02:56:35.948065 | orchestrator | Monday 16 February 2026 02:56:19 +0000 (0:00:00.258) 0:00:00.258 *******
2026-02-16 02:56:35.948077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 02:56:35.948091 | orchestrator |
2026-02-16 02:56:35.948103 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-02-16 02:56:35.948114 | orchestrator | Monday 16 February 2026 02:56:20 +0000 (0:00:01.111) 0:00:01.369 *******
2026-02-16 02:56:35.948125 | orchestrator | changed: [testbed-manager]
2026-02-16 02:56:35.948137 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:56:35.948148 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:56:35.948160 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:56:35.948171 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:56:35.948182 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:56:35.948193 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:56:35.948205 | orchestrator |
2026-02-16 02:56:35.948216 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-02-16 02:56:35.948279 | orchestrator | Monday 16 February 2026 02:56:31 +0000 (0:00:10.918) 0:00:12.288 *******
2026-02-16 02:56:35.948292 | orchestrator | changed: [testbed-manager]
2026-02-16 02:56:35.948303 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:56:35.948313 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:56:35.948324 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:56:35.948335 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:56:35.948345 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:56:35.948356 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:56:35.948367 | orchestrator |
2026-02-16 02:56:35.948378 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-16 02:56:35.948389 | orchestrator | Monday 16 February 2026 02:56:32 +0000 (0:00:01.400) 0:00:13.688 *******
2026-02-16 02:56:35.948400 | orchestrator | ok: [testbed-manager]
2026-02-16 02:56:35.948412 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:56:35.948422 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:56:35.948433 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:56:35.948444 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:56:35.948454 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:56:35.948465 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:56:35.948476 | orchestrator |
2026-02-16 02:56:35.948487 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-16 02:56:35.948498 | orchestrator | Monday 16 February 2026 02:56:33 +0000 (0:00:01.357) 0:00:15.046 *******
2026-02-16 02:56:35.948509 | orchestrator | changed: [testbed-manager]
2026-02-16 02:56:35.948520 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:56:35.948531 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:56:35.948541 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:56:35.948552 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:56:35.948563 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:56:35.948574 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:56:35.948584 | orchestrator |
2026-02-16 02:56:35.948595 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 02:56:35.948606 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 02:56:35.948644 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 02:56:35.948657 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 02:56:35.948668 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 02:56:35.948679 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 02:56:35.948690 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 02:56:35.948700 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 02:56:35.948711 | orchestrator |
2026-02-16 02:56:35.948722 | orchestrator |
2026-02-16 02:56:35.948733 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 02:56:35.948743 | orchestrator | Monday 16 February 2026 02:56:35 +0000 (0:00:01.608) 0:00:16.654 *******
2026-02-16 02:56:35.948754 | orchestrator | ===============================================================================
2026-02-16 02:56:35.948765 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 10.92s
2026-02-16 02:56:35.948775 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.61s
2026-02-16 02:56:35.948785 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.40s
2026-02-16 02:56:35.948796 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.36s
2026-02-16 02:56:35.948807 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.11s
2026-02-16 02:56:36.198963 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-16 02:56:36.199057 | orchestrator | + osism apply network
2026-02-16 02:56:48.146688 | orchestrator | 2026-02-16 02:56:48 | INFO  | Task 8644023e-e3ce-4f38-bf71-0467b393a7c8 (network) was prepared for execution.
2026-02-16 02:56:48.146802 | orchestrator | 2026-02-16 02:56:48 | INFO  | It takes a moment until task 8644023e-e3ce-4f38-bf71-0467b393a7c8 (network) has been started and output is visible here.
2026-02-16 02:57:15.674641 | orchestrator |
2026-02-16 02:57:15.674751 | orchestrator | PLAY [Apply role network] ******************************************************
2026-02-16 02:57:15.674769 | orchestrator |
2026-02-16 02:57:15.674782 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-02-16 02:57:15.674795 | orchestrator | Monday 16 February 2026 02:56:52 +0000 (0:00:00.247) 0:00:00.247 *******
2026-02-16 02:57:15.674806 | orchestrator | ok: [testbed-manager]
2026-02-16 02:57:15.674818 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:57:15.674830 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:57:15.674841 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:57:15.674851 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:57:15.674862 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:57:15.674873 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:57:15.674883 | orchestrator |
2026-02-16 02:57:15.674894 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-02-16 02:57:15.674906 | orchestrator | Monday 16 February 2026 02:56:52 +0000 (0:00:00.668) 0:00:00.916 *******
2026-02-16 02:57:15.674918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 02:57:15.674932 | orchestrator |
2026-02-16 02:57:15.674943 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-02-16 02:57:15.674986 | orchestrator | Monday 16 February 2026 02:56:54 +0000 (0:00:01.164) 0:00:02.080 *******
2026-02-16 02:57:15.675006 | orchestrator | ok: [testbed-manager]
2026-02-16 02:57:15.675034 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:57:15.675054 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:57:15.675071 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:57:15.675088 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:57:15.675106 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:57:15.675122 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:57:15.675140 | orchestrator |
2026-02-16 02:57:15.675158 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-02-16 02:57:15.675177 | orchestrator | Monday 16 February 2026 02:56:56 +0000 (0:00:02.025) 0:00:04.106 *******
2026-02-16 02:57:15.675197 | orchestrator | ok: [testbed-manager]
2026-02-16 02:57:15.675216 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:57:15.675235 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:57:15.675253 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:57:15.675272 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:57:15.675291 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:57:15.675356 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:57:15.675376 | orchestrator |
2026-02-16 02:57:15.675396 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-02-16 02:57:15.675417 | orchestrator | Monday 16 February 2026 02:56:58 +0000 (0:00:01.864) 0:00:05.970 *******
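[Editor's note: the "Prepare netplan configuration template" / "Copy netplan configuration" tasks below write `/etc/netplan/01-osism.yaml` (the cleanup task later keeps exactly that file while removing `50-cloud-init.yaml`). As a hedged sketch only, a minimal netplan file of the kind this role plausibly renders could look like the following; the interface name and address are illustrative assumptions, not values taken from this job:]

```yaml
# Hypothetical sketch of /etc/netplan/01-osism.yaml -- interface name and
# address are assumptions for illustration, not recovered from this log.
network:
  version: 2
  renderer: networkd        # systemd-networkd backend, consistent with the
                            # networkd handlers seen later in this play
  ethernets:
    eth0:                   # assumed interface name
      dhcp4: false
      addresses:
        - 192.168.16.10/20  # assumed node address on the management network
```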
2026-02-16 02:57:15.675435 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-02-16 02:57:15.675456 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-02-16 02:57:15.675476 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-02-16 02:57:15.675496 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-02-16 02:57:15.675517 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-02-16 02:57:15.675537 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-02-16 02:57:15.675556 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-02-16 02:57:15.675575 | orchestrator |
2026-02-16 02:57:15.675617 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-02-16 02:57:15.675646 | orchestrator | Monday 16 February 2026 02:56:58 +0000 (0:00:00.920) 0:00:06.891 *******
2026-02-16 02:57:15.675666 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-16 02:57:15.675686 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-16 02:57:15.675705 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-16 02:57:15.675722 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-16 02:57:15.675741 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-16 02:57:15.675761 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-16 02:57:15.675780 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-16 02:57:15.675799 | orchestrator |
2026-02-16 02:57:15.675819 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-02-16 02:57:15.675837 | orchestrator | Monday 16 February 2026 02:57:02 +0000 (0:00:03.178) 0:00:10.069 *******
2026-02-16 02:57:15.675856 | orchestrator | changed: [testbed-manager]
2026-02-16 02:57:15.675876 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:57:15.675895 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:57:15.675914 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:57:15.675933 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:57:15.675951 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:57:15.675970 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:57:15.675989 | orchestrator |
2026-02-16 02:57:15.676009 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-02-16 02:57:15.676027 | orchestrator | Monday 16 February 2026 02:57:03 +0000 (0:00:01.586) 0:00:11.655 *******
2026-02-16 02:57:15.676045 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-16 02:57:15.676065 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-16 02:57:15.676084 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-16 02:57:15.676103 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-16 02:57:15.676139 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-16 02:57:15.676157 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-16 02:57:15.676176 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-16 02:57:15.676194 | orchestrator |
2026-02-16 02:57:15.676214 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-02-16 02:57:15.676233 | orchestrator | Monday 16 February 2026 02:57:05 +0000 (0:00:01.600) 0:00:13.256 *******
2026-02-16 02:57:15.676253 | orchestrator | ok: [testbed-manager]
2026-02-16 02:57:15.676266 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:57:15.676276 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:57:15.676287 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:57:15.676357 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:57:15.676379 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:57:15.676395 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:57:15.676413 | orchestrator |
2026-02-16 02:57:15.676430 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-02-16 02:57:15.676470 | orchestrator | Monday 16 February 2026 02:57:06 +0000 (0:00:01.073) 0:00:14.329 *******
2026-02-16 02:57:15.676490 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:57:15.676508 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:57:15.676528 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:57:15.676546 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:57:15.676566 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:57:15.676578 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:57:15.676588 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:57:15.676599 | orchestrator |
2026-02-16 02:57:15.676610 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-02-16 02:57:15.676621 | orchestrator | Monday 16 February 2026 02:57:07 +0000 (0:00:00.649) 0:00:14.979 *******
2026-02-16 02:57:15.676631 | orchestrator | ok: [testbed-manager]
2026-02-16 02:57:15.676642 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:57:15.676653 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:57:15.676663 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:57:15.676674 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:57:15.676685 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:57:15.676695 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:57:15.676706 | orchestrator |
2026-02-16 02:57:15.676717 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-02-16 02:57:15.676727 | orchestrator | Monday 16 February 2026 02:57:09 +0000 (0:00:02.094) 0:00:17.073 *******
2026-02-16 02:57:15.676738 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:57:15.676749 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:57:15.676759 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:57:15.676770 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:57:15.676780 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:57:15.676791 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:57:15.676803 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-02-16 02:57:15.676815 | orchestrator |
2026-02-16 02:57:15.676826 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-02-16 02:57:15.676837 | orchestrator | Monday 16 February 2026 02:57:10 +0000 (0:00:00.882) 0:00:17.955 *******
2026-02-16 02:57:15.676847 | orchestrator | ok: [testbed-manager]
2026-02-16 02:57:15.676858 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:57:15.676869 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:57:15.676879 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:57:15.676890 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:57:15.676900 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:57:15.676911 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:57:15.676922 | orchestrator |
2026-02-16 02:57:15.676933 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-02-16 02:57:15.676943 | orchestrator | Monday 16 February 2026 02:57:11 +0000 (0:00:01.615) 0:00:19.571 *******
2026-02-16 02:57:15.676955 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 02:57:15.676979 | orchestrator |
2026-02-16 02:57:15.676990 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-16 02:57:15.677001 | orchestrator | Monday 16 February 2026 02:57:12 +0000 (0:00:01.166) 0:00:20.738 *******
2026-02-16 02:57:15.677011 | orchestrator | ok: [testbed-manager]
2026-02-16 02:57:15.677022 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:57:15.677033 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:57:15.677044 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:57:15.677061 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:57:15.677072 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:57:15.677083 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:57:15.677093 | orchestrator |
2026-02-16 02:57:15.677104 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-02-16 02:57:15.677115 | orchestrator | Monday 16 February 2026 02:57:13 +0000 (0:00:01.084) 0:00:21.823 *******
2026-02-16 02:57:15.677126 | orchestrator | ok: [testbed-manager]
2026-02-16 02:57:15.677137 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:57:15.677147 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:57:15.677158 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:57:15.677168 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:57:15.677179 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:57:15.677189 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:57:15.677200 | orchestrator |
2026-02-16 02:57:15.677211 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-16 02:57:15.677222 | orchestrator | Monday 16 February 2026 02:57:14 +0000 (0:00:00.620) 0:00:22.443 *******
2026-02-16 02:57:15.677232 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-02-16 02:57:15.677243 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-02-16 02:57:15.677254 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-02-16 02:57:15.677265 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-02-16 02:57:15.677276 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-16 02:57:15.677287 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-02-16 02:57:15.677348 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-16 02:57:15.677361 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-02-16 02:57:15.677372 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-16 02:57:15.677382 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-16 02:57:15.677393 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-16 02:57:15.677404 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-02-16 02:57:15.677414 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-16 02:57:15.677425 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-16 02:57:15.677436 | orchestrator |
2026-02-16 02:57:15.677457 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-02-16 02:57:30.434838 | orchestrator | Monday 16 February 2026 02:57:15 +0000 (0:00:01.159) 0:00:23.603 *******
2026-02-16 02:57:30.434934 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:57:30.434946 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:57:30.434955 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:57:30.434964 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:57:30.434973 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:57:30.434982 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:57:30.434991 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:57:30.434999 | orchestrator |
2026-02-16 02:57:30.435033 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-02-16 02:57:30.435043 | orchestrator | Monday 16 February 2026 02:57:16 +0000 (0:00:00.585) 0:00:24.188 *******
2026-02-16 02:57:30.435053 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5
2026-02-16 02:57:30.435064 | orchestrator |
2026-02-16 02:57:30.435073 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-02-16 02:57:30.435082 | orchestrator | Monday 16 February 2026 02:57:20 +0000 (0:00:04.071) 0:00:28.259 *******
2026-02-16 02:57:30.435092 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-16 02:57:30.435103 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-16 02:57:30.435112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-16 02:57:30.435121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-16 02:57:30.435130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-16 02:57:30.435151 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-16 02:57:30.435160 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-16 02:57:30.435174 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-16 02:57:30.435189 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-16 02:57:30.435212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-16 02:57:30.435227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-16 02:57:30.435261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-16 02:57:30.435288 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-16 02:57:30.435304 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-16 02:57:30.435318 | orchestrator |
2026-02-16 02:57:30.435417 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-02-16 02:57:30.435435 | orchestrator | Monday 16 February 2026 02:57:25 +0000 (0:00:04.981) 0:00:33.240 *******
2026-02-16 02:57:30.435451 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-16 02:57:30.435469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-16 02:57:30.435485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-16 02:57:30.435502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-16 02:57:30.435518 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-16 02:57:30.435549 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-16 02:57:30.435561 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-16 02:57:30.435572 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-16 02:57:30.435583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-16 02:57:30.435593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-16 02:57:30.435604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-16 02:57:30.435623 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-16 02:57:30.435646 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-16 02:57:35.692615 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-16 02:57:35.692731 | orchestrator |
2026-02-16 02:57:35.692750 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-02-16 02:57:35.692763 | orchestrator | Monday 16 February 2026 02:57:30 +0000 (0:00:05.121) 0:00:38.362 *******
2026-02-16 02:57:35.692777 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 02:57:35.692789 | orchestrator |
2026-02-16 02:57:35.692800 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-16 02:57:35.692811 | orchestrator | Monday 16 February 2026 02:57:31 +0000 (0:00:01.081) 0:00:39.443 *******
2026-02-16 02:57:35.692823 | orchestrator | ok: [testbed-manager]
2026-02-16 02:57:35.692835 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:57:35.692846 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:57:35.692857 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:57:35.692867 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:57:35.692878 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:57:35.692889 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:57:35.692900 | orchestrator |
2026-02-16 02:57:35.692911 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-16 02:57:35.692922 | orchestrator | Monday 16 February 2026 02:57:32 +0000 (0:00:01.010) 0:00:40.454 *******
2026-02-16 02:57:35.692933 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-16 02:57:35.692944 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-16 02:57:35.692955 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-16 02:57:35.692966 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-16 02:57:35.692977 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-16 02:57:35.692988 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-16 02:57:35.692999 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-16 02:57:35.693010 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-16 02:57:35.693021 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:57:35.693032 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-16 02:57:35.693043 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-16 02:57:35.693072 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-16 02:57:35.693084 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:57:35.693097 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-16 02:57:35.693142 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-16 02:57:35.693162 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-16 02:57:35.693180 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-16 02:57:35.693198 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-16 02:57:35.693216 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:57:35.693236 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-16 02:57:35.693249 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-16 02:57:35.693262 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-16 02:57:35.693272 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-16 02:57:35.693283 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:57:35.693294 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-16 02:57:35.693304 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-16 02:57:35.693315 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-16 02:57:35.693325 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-16 02:57:35.693402 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:57:35.693414 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:57:35.693425 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-16 02:57:35.693435 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-16 02:57:35.693446 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-16 02:57:35.693457 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-16 02:57:35.693467 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:57:35.693478 | orchestrator |
2026-02-16 02:57:35.693489 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-16 02:57:35.693520 | orchestrator | Monday 16 February 2026 02:57:34 +0000 (0:00:01.634) 0:00:42.088 *******
2026-02-16 02:57:35.693531 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:57:35.693542 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:57:35.693553 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:57:35.693564 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:57:35.693575 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:57:35.693585 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:57:35.693596 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:57:35.693607 | orchestrator |
2026-02-16 02:57:35.693618 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-16 02:57:35.693629 | orchestrator | Monday 16 February 2026 02:57:34 +0000 (0:00:00.535) 0:00:42.624 *******
2026-02-16 02:57:35.693640 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:57:35.693651 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:57:35.693662 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:57:35.693673 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:57:35.693684 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:57:35.693694 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:57:35.693705 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:57:35.693716 | orchestrator |
2026-02-16 02:57:35.693727 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 02:57:35.693740 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-16 02:57:35.693752 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-16 02:57:35.693774 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-16 02:57:35.693785 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-16 02:57:35.693796 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-16 02:57:35.693807 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-16 02:57:35.693818 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-16 02:57:35.693829 | orchestrator |
2026-02-16 02:57:35.693840 | orchestrator |
2026-02-16 02:57:35.693851 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 02:57:35.693862 | orchestrator | Monday 16 February 2026 02:57:35 +0000 (0:00:00.672) 0:00:43.297 *******
2026-02-16 02:57:35.693882 | orchestrator | ===============================================================================
2026-02-16 02:57:35.693893 | orchestrator | osism.commons.network : Create systemd networkd network
files ----------- 5.12s 2026-02-16 02:57:35.693904 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.98s 2026-02-16 02:57:35.693915 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.07s 2026-02-16 02:57:35.693925 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.18s 2026-02-16 02:57:35.693936 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.09s 2026-02-16 02:57:35.693947 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.03s 2026-02-16 02:57:35.693958 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.86s 2026-02-16 02:57:35.693968 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.63s 2026-02-16 02:57:35.693979 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.62s 2026-02-16 02:57:35.693990 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.60s 2026-02-16 02:57:35.694001 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.59s 2026-02-16 02:57:35.694069 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.17s 2026-02-16 02:57:35.694085 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.16s 2026-02-16 02:57:35.694099 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.16s 2026-02-16 02:57:35.694118 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.08s 2026-02-16 02:57:35.694136 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.08s 2026-02-16 02:57:35.694154 | orchestrator | osism.commons.network : Check if path for interface file exists 
--------- 1.07s 2026-02-16 02:57:35.694173 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.01s 2026-02-16 02:57:35.694192 | orchestrator | osism.commons.network : Create required directories --------------------- 0.92s 2026-02-16 02:57:35.694211 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.88s 2026-02-16 02:57:35.971325 | orchestrator | + osism apply wireguard 2026-02-16 02:57:47.973735 | orchestrator | 2026-02-16 02:57:47 | INFO  | Task 54c583bf-95c9-439f-b894-c564a98db67c (wireguard) was prepared for execution. 2026-02-16 02:57:47.973840 | orchestrator | 2026-02-16 02:57:47 | INFO  | It takes a moment until task 54c583bf-95c9-439f-b894-c564a98db67c (wireguard) has been started and output is visible here. 2026-02-16 02:58:04.915355 | orchestrator | 2026-02-16 02:58:04.915550 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-02-16 02:58:04.915569 | orchestrator | 2026-02-16 02:58:04.915582 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-02-16 02:58:04.915593 | orchestrator | Monday 16 February 2026 02:57:51 +0000 (0:00:00.162) 0:00:00.162 ******* 2026-02-16 02:58:04.915605 | orchestrator | ok: [testbed-manager] 2026-02-16 02:58:04.915617 | orchestrator | 2026-02-16 02:58:04.915629 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-02-16 02:58:04.915640 | orchestrator | Monday 16 February 2026 02:57:52 +0000 (0:00:01.088) 0:00:01.250 ******* 2026-02-16 02:58:04.915651 | orchestrator | changed: [testbed-manager] 2026-02-16 02:58:04.915667 | orchestrator | 2026-02-16 02:58:04.915678 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-02-16 02:58:04.915689 | orchestrator | Monday 16 February 2026 02:57:57 +0000 (0:00:05.000) 0:00:06.250 ******* 2026-02-16 02:58:04.915700 
| orchestrator | changed: [testbed-manager] 2026-02-16 02:58:04.915711 | orchestrator | 2026-02-16 02:58:04.915722 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-02-16 02:58:04.915733 | orchestrator | Monday 16 February 2026 02:57:58 +0000 (0:00:00.471) 0:00:06.722 ******* 2026-02-16 02:58:04.915743 | orchestrator | changed: [testbed-manager] 2026-02-16 02:58:04.915754 | orchestrator | 2026-02-16 02:58:04.915765 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-02-16 02:58:04.915776 | orchestrator | Monday 16 February 2026 02:57:58 +0000 (0:00:00.377) 0:00:07.100 ******* 2026-02-16 02:58:04.915787 | orchestrator | ok: [testbed-manager] 2026-02-16 02:58:04.915798 | orchestrator | 2026-02-16 02:58:04.915809 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-02-16 02:58:04.915820 | orchestrator | Monday 16 February 2026 02:57:59 +0000 (0:00:00.641) 0:00:07.741 ******* 2026-02-16 02:58:04.915830 | orchestrator | ok: [testbed-manager] 2026-02-16 02:58:04.915854 | orchestrator | 2026-02-16 02:58:04.915865 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-02-16 02:58:04.915876 | orchestrator | Monday 16 February 2026 02:57:59 +0000 (0:00:00.416) 0:00:08.158 ******* 2026-02-16 02:58:04.915887 | orchestrator | ok: [testbed-manager] 2026-02-16 02:58:04.915898 | orchestrator | 2026-02-16 02:58:04.915911 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-02-16 02:58:04.915924 | orchestrator | Monday 16 February 2026 02:58:00 +0000 (0:00:00.390) 0:00:08.548 ******* 2026-02-16 02:58:04.915936 | orchestrator | changed: [testbed-manager] 2026-02-16 02:58:04.915949 | orchestrator | 2026-02-16 02:58:04.915962 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-02-16 
02:58:04.915974 | orchestrator | Monday 16 February 2026 02:58:01 +0000 (0:00:01.095) 0:00:09.644 ******* 2026-02-16 02:58:04.915987 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-16 02:58:04.915999 | orchestrator | changed: [testbed-manager] 2026-02-16 02:58:04.916012 | orchestrator | 2026-02-16 02:58:04.916025 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-02-16 02:58:04.916038 | orchestrator | Monday 16 February 2026 02:58:02 +0000 (0:00:00.882) 0:00:10.526 ******* 2026-02-16 02:58:04.916051 | orchestrator | changed: [testbed-manager] 2026-02-16 02:58:04.916064 | orchestrator | 2026-02-16 02:58:04.916077 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-02-16 02:58:04.916089 | orchestrator | Monday 16 February 2026 02:58:03 +0000 (0:00:01.588) 0:00:12.114 ******* 2026-02-16 02:58:04.916102 | orchestrator | changed: [testbed-manager] 2026-02-16 02:58:04.916114 | orchestrator | 2026-02-16 02:58:04.916127 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 02:58:04.916140 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 02:58:04.916153 | orchestrator | 2026-02-16 02:58:04.916165 | orchestrator | 2026-02-16 02:58:04.916176 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 02:58:04.916195 | orchestrator | Monday 16 February 2026 02:58:04 +0000 (0:00:00.903) 0:00:13.017 ******* 2026-02-16 02:58:04.916206 | orchestrator | =============================================================================== 2026-02-16 02:58:04.916217 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.00s 2026-02-16 02:58:04.916228 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.59s 2026-02-16 
02:58:04.916239 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.10s 2026-02-16 02:58:04.916250 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.09s 2026-02-16 02:58:04.916261 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.90s 2026-02-16 02:58:04.916272 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.88s 2026-02-16 02:58:04.916283 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.64s 2026-02-16 02:58:04.916294 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.47s 2026-02-16 02:58:04.916423 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.42s 2026-02-16 02:58:04.916439 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.39s 2026-02-16 02:58:04.916450 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.38s 2026-02-16 02:58:05.175992 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-02-16 02:58:05.209153 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-02-16 02:58:05.209244 | orchestrator | Dload Upload Total Spent Left Speed 2026-02-16 02:58:05.287716 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 190 0 --:--:-- --:--:-- --:--:-- 192 2026-02-16 02:58:05.299182 | orchestrator | + osism apply --environment custom workarounds 2026-02-16 02:58:07.169449 | orchestrator | 2026-02-16 02:58:07 | INFO  | Trying to run play workarounds in environment custom 2026-02-16 02:58:17.301323 | orchestrator | 2026-02-16 02:58:17 | INFO  | Task 70217c22-cefd-46d9-a52f-77bc47d85700 (workarounds) was prepared for execution. 
2026-02-16 02:58:17.301485 | orchestrator | 2026-02-16 02:58:17 | INFO  | It takes a moment until task 70217c22-cefd-46d9-a52f-77bc47d85700 (workarounds) has been started and output is visible here.
2026-02-16 02:58:41.555744 | orchestrator |
2026-02-16 02:58:41.555897 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-16 02:58:41.555927 | orchestrator |
2026-02-16 02:58:41.555947 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-02-16 02:58:41.555967 | orchestrator | Monday 16 February 2026 02:58:21 +0000 (0:00:00.112) 0:00:00.112 *******
2026-02-16 02:58:41.555985 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-02-16 02:58:41.556004 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-02-16 02:58:41.556022 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-02-16 02:58:41.556040 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-02-16 02:58:41.556059 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-02-16 02:58:41.556078 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-02-16 02:58:41.556098 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-02-16 02:58:41.556118 | orchestrator |
2026-02-16 02:58:41.556137 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-02-16 02:58:41.556157 | orchestrator |
2026-02-16 02:58:41.556177 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-16 02:58:41.556197 | orchestrator | Monday 16 February 2026 02:58:21 +0000 (0:00:00.664) 0:00:00.777 *******
2026-02-16 02:58:41.556215 | orchestrator | ok: [testbed-manager]
2026-02-16 02:58:41.556269 | orchestrator |
2026-02-16 02:58:41.556292 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-02-16 02:58:41.556312 | orchestrator |
2026-02-16 02:58:41.556331 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-16 02:58:41.556351 | orchestrator | Monday 16 February 2026 02:58:23 +0000 (0:00:02.052) 0:00:02.829 *******
2026-02-16 02:58:41.556371 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:58:41.556392 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:58:41.556411 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:58:41.556431 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:58:41.556483 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:58:41.556504 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:58:41.556522 | orchestrator |
2026-02-16 02:58:41.556542 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-02-16 02:58:41.556563 | orchestrator |
2026-02-16 02:58:41.556584 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-02-16 02:58:41.556622 | orchestrator | Monday 16 February 2026 02:58:25 +0000 (0:00:01.779) 0:00:04.609 *******
2026-02-16 02:58:41.556644 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-16 02:58:41.556664 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-16 02:58:41.556683 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-16 02:58:41.556702 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-16 02:58:41.556720 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-16 02:58:41.556737 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-16 02:58:41.556757 | orchestrator |
2026-02-16 02:58:41.556777 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-02-16 02:58:41.556795 | orchestrator | Monday 16 February 2026 02:58:27 +0000 (0:00:01.453) 0:00:06.063 *******
2026-02-16 02:58:41.556812 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:58:41.556831 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:58:41.556848 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:58:41.556866 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:58:41.556885 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:58:41.556904 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:58:41.556922 | orchestrator |
2026-02-16 02:58:41.556942 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-02-16 02:58:41.556954 | orchestrator | Monday 16 February 2026 02:58:30 +0000 (0:00:03.918) 0:00:09.981 *******
2026-02-16 02:58:41.556968 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:58:41.556988 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:58:41.557006 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:58:41.557024 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:58:41.557040 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:58:41.557058 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:58:41.557076 | orchestrator |
2026-02-16 02:58:41.557095 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-02-16 02:58:41.557115 | orchestrator |
2026-02-16 02:58:41.557134 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-02-16 02:58:41.557152 | orchestrator | Monday 16 February 2026 02:58:31 +0000 (0:00:00.651) 0:00:10.632 *******
2026-02-16 02:58:41.557171 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:58:41.557182 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:58:41.557193 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:58:41.557204 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:58:41.557214 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:58:41.557225 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:58:41.557253 | orchestrator | changed: [testbed-manager]
2026-02-16 02:58:41.557264 | orchestrator |
2026-02-16 02:58:41.557275 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-02-16 02:58:41.557285 | orchestrator | Monday 16 February 2026 02:58:33 +0000 (0:00:01.528) 0:00:12.161 *******
2026-02-16 02:58:41.557296 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:58:41.557307 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:58:41.557317 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:58:41.557328 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:58:41.557339 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:58:41.557349 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:58:41.557385 | orchestrator | changed: [testbed-manager]
2026-02-16 02:58:41.557397 | orchestrator |
2026-02-16 02:58:41.557408 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-02-16 02:58:41.557419 | orchestrator | Monday 16 February 2026 02:58:34 +0000 (0:00:01.439) 0:00:13.664 *******
2026-02-16 02:58:41.557429 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:58:41.557440 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:58:41.557526 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:58:41.557539 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:58:41.557550 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:58:41.557560 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:58:41.557571 | orchestrator | ok: [testbed-manager]
2026-02-16 02:58:41.557582 | orchestrator |
2026-02-16 02:58:41.557593 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-02-16 02:58:41.557604 | orchestrator | Monday 16 February 2026 02:58:36 +0000 (0:00:01.439) 0:00:15.104 *******
2026-02-16 02:58:41.557615 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:58:41.557626 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:58:41.557636 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:58:41.557647 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:58:41.557658 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:58:41.557669 | orchestrator | changed: [testbed-manager]
2026-02-16 02:58:41.557679 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:58:41.557690 | orchestrator |
2026-02-16 02:58:41.557701 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-02-16 02:58:41.557712 | orchestrator | Monday 16 February 2026 02:58:38 +0000 (0:00:02.200) 0:00:17.305 *******
2026-02-16 02:58:41.557723 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:58:41.557734 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:58:41.557745 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:58:41.557755 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:58:41.557766 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:58:41.557777 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:58:41.557788 | orchestrator | skipping: [testbed-manager]
2026-02-16 02:58:41.557798 | orchestrator |
2026-02-16 02:58:41.557809 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-02-16 02:58:41.557820 | orchestrator |
2026-02-16 02:58:41.557831 | orchestrator | TASK [Install python3-docker] **************************************************
2026-02-16 02:58:41.557842 | orchestrator | Monday 16 February 2026 02:58:38 +0000 (0:00:00.555) 0:00:17.860 *******
2026-02-16 02:58:41.557853 | orchestrator | ok: [testbed-manager]
2026-02-16 02:58:41.557864 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:58:41.557874 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:58:41.557885 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:58:41.557896 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:58:41.557916 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:58:41.557927 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:58:41.557938 | orchestrator |
2026-02-16 02:58:41.557949 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 02:58:41.557961 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 02:58:41.557974 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 02:58:41.557994 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 02:58:41.558005 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 02:58:41.558066 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 02:58:41.558081 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 02:58:41.558092 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 02:58:41.558103 | orchestrator |
2026-02-16 02:58:41.558115 | orchestrator |
2026-02-16 02:58:41.558125 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 02:58:41.558136 | orchestrator | Monday 16 February 2026 02:58:41 +0000 (0:00:02.692) 0:00:20.553 *******
2026-02-16 02:58:41.558147 | orchestrator | ===============================================================================
2026-02-16 02:58:41.558158 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.92s
2026-02-16 02:58:41.558169 | orchestrator | Install python3-docker -------------------------------------------------- 2.69s
2026-02-16 02:58:41.558180 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.20s
2026-02-16 02:58:41.558191 | orchestrator | Apply netplan configuration --------------------------------------------- 2.05s
2026-02-16 02:58:41.558201 | orchestrator | Apply netplan configuration --------------------------------------------- 1.78s
2026-02-16 02:58:41.558212 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.53s
2026-02-16 02:58:41.558223 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.50s
2026-02-16 02:58:41.558234 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.45s
2026-02-16 02:58:41.558244 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.44s
2026-02-16 02:58:41.558255 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.66s
2026-02-16 02:58:41.558266 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.65s
2026-02-16 02:58:41.558287 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.56s
2026-02-16 02:58:42.141069 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-02-16 02:58:54.170298 | orchestrator | 2026-02-16 02:58:54 | INFO  | Task 5e6b90b1-36ec-403b-af82-d03b5d2c738d (reboot) was prepared for execution.
2026-02-16 02:58:54.170432 | orchestrator | 2026-02-16 02:58:54 | INFO  | It takes a moment until task 5e6b90b1-36ec-403b-af82-d03b5d2c738d (reboot) has been started and output is visible here.
2026-02-16 02:59:04.035146 | orchestrator |
2026-02-16 02:59:04.035246 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-16 02:59:04.035263 | orchestrator |
2026-02-16 02:59:04.035276 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-16 02:59:04.035288 | orchestrator | Monday 16 February 2026 02:58:58 +0000 (0:00:00.201) 0:00:00.201 *******
2026-02-16 02:59:04.035299 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:59:04.035311 | orchestrator |
2026-02-16 02:59:04.035323 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-16 02:59:04.035334 | orchestrator | Monday 16 February 2026 02:58:58 +0000 (0:00:00.100) 0:00:00.301 *******
2026-02-16 02:59:04.035346 | orchestrator | changed: [testbed-node-0]
2026-02-16 02:59:04.035357 | orchestrator |
2026-02-16 02:59:04.035368 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-16 02:59:04.035401 | orchestrator | Monday 16 February 2026 02:58:59 +0000 (0:00:00.886) 0:00:01.187 *******
2026-02-16 02:59:04.035413 | orchestrator | skipping: [testbed-node-0]
2026-02-16 02:59:04.035424 | orchestrator |
2026-02-16 02:59:04.035436 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-16 02:59:04.035447 | orchestrator |
2026-02-16 02:59:04.035458 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-16 02:59:04.035469 | orchestrator | Monday 16 February 2026 02:58:59 +0000 (0:00:00.099) 0:00:01.287 *******
2026-02-16 02:59:04.035481 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:59:04.035527 | orchestrator |
2026-02-16 02:59:04.035539 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-16 02:59:04.035550 | orchestrator | Monday 16 February 2026 02:58:59 +0000 (0:00:00.090) 0:00:01.377 *******
2026-02-16 02:59:04.035561 | orchestrator | changed: [testbed-node-1]
2026-02-16 02:59:04.035572 | orchestrator |
2026-02-16 02:59:04.035583 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-16 02:59:04.035606 | orchestrator | Monday 16 February 2026 02:59:00 +0000 (0:00:00.640) 0:00:02.018 *******
2026-02-16 02:59:04.035617 | orchestrator | skipping: [testbed-node-1]
2026-02-16 02:59:04.035628 | orchestrator |
2026-02-16 02:59:04.035640 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-16 02:59:04.035660 | orchestrator |
2026-02-16 02:59:04.035679 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-16 02:59:04.035698 | orchestrator | Monday 16 February 2026 02:59:00 +0000 (0:00:00.109) 0:00:02.127 *******
2026-02-16 02:59:04.035717 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:59:04.035737 | orchestrator |
2026-02-16 02:59:04.035757 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-16 02:59:04.035774 | orchestrator | Monday 16 February 2026 02:59:00 +0000 (0:00:00.181) 0:00:02.309 *******
2026-02-16 02:59:04.035787 | orchestrator | changed: [testbed-node-2]
2026-02-16 02:59:04.035800 | orchestrator |
2026-02-16 02:59:04.035814 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-16 02:59:04.035826 | orchestrator | Monday 16 February 2026 02:59:01 +0000 (0:00:00.660) 0:00:02.970 *******
2026-02-16 02:59:04.035839 | orchestrator | skipping: [testbed-node-2]
2026-02-16 02:59:04.035852 | orchestrator |
2026-02-16 02:59:04.035865 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-16 02:59:04.035878 | orchestrator |
2026-02-16 02:59:04.035890 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-16 02:59:04.035903 | orchestrator | Monday 16 February 2026 02:59:01 +0000 (0:00:00.129) 0:00:03.100 *******
2026-02-16 02:59:04.035917 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:59:04.035930 | orchestrator |
2026-02-16 02:59:04.035944 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-16 02:59:04.035957 | orchestrator | Monday 16 February 2026 02:59:01 +0000 (0:00:00.094) 0:00:03.195 *******
2026-02-16 02:59:04.035970 | orchestrator | changed: [testbed-node-3]
2026-02-16 02:59:04.035982 | orchestrator |
2026-02-16 02:59:04.035995 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-16 02:59:04.036007 | orchestrator | Monday 16 February 2026 02:59:02 +0000 (0:00:00.661) 0:00:03.856 *******
2026-02-16 02:59:04.036021 | orchestrator | skipping: [testbed-node-3]
2026-02-16 02:59:04.036032 | orchestrator |
2026-02-16 02:59:04.036043 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-16 02:59:04.036054 | orchestrator |
2026-02-16 02:59:04.036066 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-16 02:59:04.036086 | orchestrator | Monday 16 February 2026 02:59:02 +0000 (0:00:00.102) 0:00:03.959 *******
2026-02-16 02:59:04.036106 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:59:04.036125 | orchestrator |
2026-02-16 02:59:04.036141 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-16 02:59:04.036161 | orchestrator | Monday 16 February 2026 02:59:02 +0000 (0:00:00.090) 0:00:04.049 *******
2026-02-16 02:59:04.036172 | orchestrator | changed: [testbed-node-4]
2026-02-16 02:59:04.036184 | orchestrator |
2026-02-16 02:59:04.036195 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-16 02:59:04.036206 | orchestrator | Monday 16 February 2026 02:59:02 +0000 (0:00:00.651) 0:00:04.700 *******
2026-02-16 02:59:04.036218 | orchestrator | skipping: [testbed-node-4]
2026-02-16 02:59:04.036237 | orchestrator |
2026-02-16 02:59:04.036256 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-16 02:59:04.036274 | orchestrator |
2026-02-16 02:59:04.036292 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-16 02:59:04.036312 | orchestrator | Monday 16 February 2026 02:59:02 +0000 (0:00:00.104) 0:00:04.805 *******
2026-02-16 02:59:04.036330 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:59:04.036347 | orchestrator |
2026-02-16 02:59:04.036359 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-16 02:59:04.036370 | orchestrator | Monday 16 February 2026 02:59:03 +0000 (0:00:00.091) 0:00:04.896 *******
2026-02-16 02:59:04.036381 | orchestrator | changed: [testbed-node-5]
2026-02-16 02:59:04.036392 | orchestrator |
2026-02-16 02:59:04.036403 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-16 02:59:04.036414 | orchestrator | Monday 16 February 2026 02:59:03 +0000 (0:00:00.649) 0:00:05.546 *******
2026-02-16 02:59:04.036441 | orchestrator | skipping: [testbed-node-5]
2026-02-16 02:59:04.036452 | orchestrator |
2026-02-16 02:59:04.036463 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 02:59:04.036475 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 02:59:04.036506 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 02:59:04.036517 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 02:59:04.036528 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 02:59:04.036539 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 02:59:04.036550 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 02:59:04.036561 | orchestrator | 2026-02-16 02:59:04.036572 | orchestrator | 2026-02-16 02:59:04.036583 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 02:59:04.036594 | orchestrator | Monday 16 February 2026 02:59:03 +0000 (0:00:00.033) 0:00:05.579 ******* 2026-02-16 02:59:04.036612 | orchestrator | =============================================================================== 2026-02-16 02:59:04.036623 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.15s 2026-02-16 02:59:04.036634 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.65s 2026-02-16 02:59:04.036645 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.58s 2026-02-16 02:59:04.314116 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-16 02:59:16.278265 | orchestrator | 2026-02-16 02:59:16 | INFO  | Task 6ffa3366-2318-4d65-b2fe-26feddfd7ce1 (wait-for-connection) was prepared for execution. 2026-02-16 02:59:16.278358 | orchestrator | 2026-02-16 02:59:16 | INFO  | It takes a moment until task 6ffa3366-2318-4d65-b2fe-26feddfd7ce1 (wait-for-connection) has been started and output is visible here. 
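The play above reboots each node without waiting, and the subsequent `osism apply wait-for-connection` run polls until the nodes answer again. A minimal sketch of that poll-until-reachable pattern in shell; `wait_until_ok` and its probe command are illustrative stand-ins, not names from the testbed scripts (the real check would be an SSH reachability probe):

```shell
#!/usr/bin/env bash
# Sketch: retry a probe command until it succeeds or max_attempts is
# exhausted, mirroring the reboot-then-wait-for-connection flow above.
# Usage: wait_until_ok <max_attempts> <command> [args...]
wait_until_ok() {
    local max_attempts=$1
    shift
    local attempt=1
    until "$@"; do
        # Give up once the attempt counter reaches the limit.
        if (( attempt++ == max_attempts )); then
            return 1
        fi
        sleep 1
    done
}
```

In the job, the equivalent of the probe is Ansible's reachability check inside the wait-for-connection play; the shell loop only illustrates the control flow.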
2026-02-16 02:59:32.365397 | orchestrator |
2026-02-16 02:59:32.365597 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-02-16 02:59:32.365619 | orchestrator |
2026-02-16 02:59:32.365632 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-02-16 02:59:32.365644 | orchestrator | Monday 16 February 2026 02:59:20 +0000 (0:00:00.242) 0:00:00.242 *******
2026-02-16 02:59:32.365656 | orchestrator | ok: [testbed-node-1]
2026-02-16 02:59:32.365668 | orchestrator | ok: [testbed-node-0]
2026-02-16 02:59:32.365679 | orchestrator | ok: [testbed-node-2]
2026-02-16 02:59:32.365690 | orchestrator | ok: [testbed-node-3]
2026-02-16 02:59:32.365701 | orchestrator | ok: [testbed-node-4]
2026-02-16 02:59:32.365711 | orchestrator | ok: [testbed-node-5]
2026-02-16 02:59:32.365722 | orchestrator |
2026-02-16 02:59:32.365733 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 02:59:32.365745 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 02:59:32.365758 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 02:59:32.365769 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 02:59:32.365780 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 02:59:32.365791 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 02:59:32.365802 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 02:59:32.365814 | orchestrator |
2026-02-16 02:59:32.365825 | orchestrator |
2026-02-16 02:59:32.365836 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 02:59:32.365847 | orchestrator | Monday 16 February 2026 02:59:32 +0000 (0:00:11.522) 0:00:11.765 *******
2026-02-16 02:59:32.365858 | orchestrator | ===============================================================================
2026-02-16 02:59:32.365869 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.52s
2026-02-16 02:59:32.618453 | orchestrator | + osism apply hddtemp
2026-02-16 02:59:44.645099 | orchestrator | 2026-02-16 02:59:44 | INFO  | Task d3c5dac9-364f-43b5-8fec-160506025cef (hddtemp) was prepared for execution.
2026-02-16 02:59:44.645209 | orchestrator | 2026-02-16 02:59:44 | INFO  | It takes a moment until task d3c5dac9-364f-43b5-8fec-160506025cef (hddtemp) has been started and output is visible here.
2026-02-16 03:00:11.710843 | orchestrator |
2026-02-16 03:00:11.710920 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-02-16 03:00:11.710927 | orchestrator |
2026-02-16 03:00:11.710932 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-02-16 03:00:11.710937 | orchestrator | Monday 16 February 2026 02:59:48 +0000 (0:00:00.186) 0:00:00.186 *******
2026-02-16 03:00:11.710941 | orchestrator | ok: [testbed-manager]
2026-02-16 03:00:11.710947 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:00:11.710951 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:00:11.710955 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:00:11.710959 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:00:11.710963 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:00:11.710967 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:00:11.710970 | orchestrator |
2026-02-16 03:00:11.710974 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-02-16 03:00:11.710978 | orchestrator | Monday 16 February 2026 02:59:49 +0000 (0:00:00.487) 0:00:00.673 *******
2026-02-16 03:00:11.710983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 03:00:11.711003 | orchestrator |
2026-02-16 03:00:11.711007 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-02-16 03:00:11.711011 | orchestrator | Monday 16 February 2026 02:59:50 +0000 (0:00:00.968) 0:00:01.641 *******
2026-02-16 03:00:11.711014 | orchestrator | ok: [testbed-manager]
2026-02-16 03:00:11.711018 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:00:11.711022 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:00:11.711026 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:00:11.711029 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:00:11.711033 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:00:11.711037 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:00:11.711041 | orchestrator |
2026-02-16 03:00:11.711045 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-02-16 03:00:11.711057 | orchestrator | Monday 16 February 2026 02:59:51 +0000 (0:00:01.725) 0:00:03.367 *******
2026-02-16 03:00:11.711061 | orchestrator | changed: [testbed-manager]
2026-02-16 03:00:11.711065 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:00:11.711069 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:00:11.711073 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:00:11.711076 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:00:11.711080 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:00:11.711084 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:00:11.711087 | orchestrator |
2026-02-16 03:00:11.711091 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-02-16 03:00:11.711095 | orchestrator | Monday 16 February 2026 02:59:52 +0000 (0:00:01.016) 0:00:04.383 *******
2026-02-16 03:00:11.711098 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:00:11.711102 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:00:11.711106 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:00:11.711109 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:00:11.711113 | orchestrator | ok: [testbed-manager]
2026-02-16 03:00:11.711117 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:00:11.711128 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:00:11.711131 | orchestrator |
2026-02-16 03:00:11.711135 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-02-16 03:00:11.711139 | orchestrator | Monday 16 February 2026 02:59:53 +0000 (0:00:01.075) 0:00:05.459 *******
2026-02-16 03:00:11.711143 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:00:11.711147 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:00:11.711150 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:00:11.711154 | orchestrator | changed: [testbed-manager]
2026-02-16 03:00:11.711158 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:00:11.711162 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:00:11.711165 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:00:11.711169 | orchestrator |
2026-02-16 03:00:11.711173 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-02-16 03:00:11.711177 | orchestrator | Monday 16 February 2026 02:59:54 +0000 (0:00:00.760) 0:00:06.219 *******
2026-02-16 03:00:11.711180 | orchestrator | changed: [testbed-manager]
2026-02-16 03:00:11.711184 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:00:11.711188 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:00:11.711191 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:00:11.711195 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:00:11.711199 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:00:11.711202 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:00:11.711206 | orchestrator |
2026-02-16 03:00:11.711210 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-02-16 03:00:11.711213 | orchestrator | Monday 16 February 2026 03:00:08 +0000 (0:00:13.604) 0:00:19.824 *******
2026-02-16 03:00:11.711217 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 03:00:11.711225 | orchestrator |
2026-02-16 03:00:11.711228 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-02-16 03:00:11.711232 | orchestrator | Monday 16 February 2026 03:00:09 +0000 (0:00:01.238) 0:00:21.062 *******
2026-02-16 03:00:11.711236 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:00:11.711240 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:00:11.711244 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:00:11.711247 | orchestrator | changed: [testbed-manager]
2026-02-16 03:00:11.711251 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:00:11.711255 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:00:11.711259 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:00:11.711262 | orchestrator |
2026-02-16 03:00:11.711266 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 03:00:11.711270 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 03:00:11.711284 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 03:00:11.711289 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 03:00:11.711293 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 03:00:11.711297 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 03:00:11.711300 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 03:00:11.711304 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 03:00:11.711308 | orchestrator |
2026-02-16 03:00:11.711312 | orchestrator |
2026-02-16 03:00:11.711316 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 03:00:11.711319 | orchestrator | Monday 16 February 2026 03:00:11 +0000 (0:00:01.804) 0:00:22.866 *******
2026-02-16 03:00:11.711323 | orchestrator | ===============================================================================
2026-02-16 03:00:11.711327 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.60s
2026-02-16 03:00:11.711331 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.80s
2026-02-16 03:00:11.711334 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.73s
2026-02-16 03:00:11.711341 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.24s
2026-02-16 03:00:11.711345 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.08s
2026-02-16 03:00:11.711349 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.02s
2026-02-16 03:00:11.711352 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 0.97s
2026-02-16 03:00:11.711356 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.76s
2026-02-16 03:00:11.711360 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.49s
2026-02-16 03:00:12.013003 | orchestrator | ++ semver 9.5.0 7.1.1
2026-02-16 03:00:12.061408 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-16 03:00:12.061504 | orchestrator | + sudo systemctl restart manager.service
2026-02-16 03:00:26.024741 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-16 03:00:26.024845 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-02-16 03:00:26.024861 | orchestrator | + local max_attempts=60
2026-02-16 03:00:26.024874 | orchestrator | + local name=ceph-ansible
2026-02-16 03:00:26.024886 | orchestrator | + local attempt_num=1
2026-02-16 03:00:26.024898 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-16 03:00:26.064298 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-16 03:00:26.064378 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-16 03:00:26.064715 | orchestrator | + sleep 5
2026-02-16 03:00:31.070247 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-16 03:00:31.113918 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-16 03:00:31.114082 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-16 03:00:31.114105 | orchestrator | + sleep 5
2026-02-16 03:00:36.117783 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-16 03:00:36.151896 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-16 03:00:36.151995 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-16 03:00:36.152010 | orchestrator | + sleep 5
2026-02-16 03:00:41.156389 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-16 03:00:41.193828 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-16 03:00:41.193930 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-16 03:00:41.193946 | orchestrator | + sleep 5
2026-02-16 03:00:46.198390 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-16 03:00:46.233362 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-16 03:00:46.233468 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-16 03:00:46.233483 | orchestrator | + sleep 5
2026-02-16 03:00:51.237988 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-16 03:00:51.273855 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-16 03:00:51.273957 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-16 03:00:51.273972 | orchestrator | + sleep 5
2026-02-16 03:00:56.277983 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-16 03:00:56.315657 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-16 03:00:56.315754 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-16 03:00:56.315769 | orchestrator | + sleep 5
2026-02-16 03:01:01.321385 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-16 03:01:01.356540 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-16 03:01:01.356721 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-16 03:01:01.356767 | orchestrator | + sleep 5
2026-02-16 03:01:06.361131 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-16 03:01:06.395053 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-16 03:01:06.501872 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-16 03:01:06.501944 | orchestrator | + sleep 5
2026-02-16 03:01:11.398356 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-16 03:01:11.429725 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-16 03:01:11.429820 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-16 03:01:11.429843 | orchestrator | + sleep 5
2026-02-16 03:01:16.434281 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-16 03:01:16.471768 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-16 03:01:16.471861 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-16 03:01:16.471876 | orchestrator | + sleep 5
2026-02-16 03:01:21.477012 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-16 03:01:21.515425 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-16 03:01:21.515527 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-16 03:01:21.515542 | orchestrator | + sleep 5
2026-02-16 03:01:26.520528 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-16 03:01:26.551456 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-16 03:01:26.551574 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-16 03:01:26.551600 | orchestrator | + sleep 5
2026-02-16 03:01:31.557246 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-16 03:01:31.594645 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-16 03:01:31.594764 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-02-16 03:01:31.594781 | orchestrator | + local max_attempts=60
2026-02-16 03:01:31.594795 | orchestrator | + local name=kolla-ansible
2026-02-16 03:01:31.594807 | orchestrator | + local attempt_num=1
2026-02-16 03:01:31.594829 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-02-16 03:01:31.623072 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-16 03:01:31.623164 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-02-16 03:01:31.623207 | orchestrator | + local max_attempts=60
2026-02-16 03:01:31.623304 | orchestrator | + local name=osism-ansible
2026-02-16 03:01:31.623321 | orchestrator | + local attempt_num=1
2026-02-16 03:01:31.623344 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-02-16 03:01:31.654786 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-16 03:01:31.654874 | orchestrator | + [[ true == \t\r\u\e ]]
2026-02-16 03:01:31.654889 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-02-16 03:01:31.798962 | orchestrator | ARA in ceph-ansible already disabled.
2026-02-16 03:01:31.931737 | orchestrator | ARA in kolla-ansible already disabled.
2026-02-16 03:01:32.052215 | orchestrator | ARA in osism-ansible already disabled.
2026-02-16 03:01:32.188356 | orchestrator | ARA in osism-kubernetes already disabled.
2026-02-16 03:01:32.189892 | orchestrator | + osism apply gather-facts
2026-02-16 03:01:44.257426 | orchestrator | 2026-02-16 03:01:44 | INFO  | Task 0301e05f-e2c0-4b0c-a184-7ceca59d7927 (gather-facts) was prepared for execution.
2026-02-16 03:01:44.257537 | orchestrator | 2026-02-16 03:01:44 | INFO  | It takes a moment until task 0301e05f-e2c0-4b0c-a184-7ceca59d7927 (gather-facts) has been started and output is visible here.
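The `set -x` trace above exposes the control flow of `wait_for_container_healthy`: poll the container's Docker health status every 5 seconds until it reports `healthy`, giving up after `max_attempts` tries. A reconstruction sketched from that trace; factoring the probe into a `health_status` function is my addition (the original presumably inlines the `docker inspect` call), and the failure message is assumed:

```shell
#!/usr/bin/env bash
# Reconstructed sketch of the traced helper. The probe is isolated in
# health_status so the loop can be exercised without a Docker daemon.
health_status() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Keep polling until the health check reports "healthy".
    until [[ "$(health_status "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

With `max_attempts=60` and a 5-second sleep, this bounds the wait at roughly five minutes per container, which matches the ceph-ansible container going `unhealthy` → `starting` → `healthy` over about a minute in the log.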
2026-02-16 03:01:56.850954 | orchestrator |
2026-02-16 03:01:56.851095 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-16 03:01:56.851122 | orchestrator |
2026-02-16 03:01:56.851141 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-16 03:01:56.851161 | orchestrator | Monday 16 February 2026 03:01:47 +0000 (0:00:00.157) 0:00:00.157 *******
2026-02-16 03:01:56.851181 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:01:56.851200 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:01:56.851217 | orchestrator | ok: [testbed-manager]
2026-02-16 03:01:56.851234 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:01:56.851252 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:01:56.851271 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:01:56.851288 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:01:56.851307 | orchestrator |
2026-02-16 03:01:56.851326 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-16 03:01:56.851343 | orchestrator |
2026-02-16 03:01:56.851361 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-16 03:01:56.851379 | orchestrator | Monday 16 February 2026 03:01:55 +0000 (0:00:08.206) 0:00:08.364 *******
2026-02-16 03:01:56.851398 | orchestrator | skipping: [testbed-manager]
2026-02-16 03:01:56.851417 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:01:56.851436 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:01:56.851455 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:01:56.851472 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:01:56.851489 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:01:56.851506 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:01:56.851524 | orchestrator |
2026-02-16 03:01:56.851541 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 03:01:56.851560 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 03:01:56.851579 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 03:01:56.851595 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 03:01:56.851614 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 03:01:56.851636 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 03:01:56.851653 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 03:01:56.851704 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 03:01:56.851797 | orchestrator |
2026-02-16 03:01:56.851817 | orchestrator |
2026-02-16 03:01:56.851833 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 03:01:56.851852 | orchestrator | Monday 16 February 2026 03:01:56 +0000 (0:00:00.502) 0:00:08.867 *******
2026-02-16 03:01:56.851868 | orchestrator | ===============================================================================
2026-02-16 03:01:56.851885 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.21s
2026-02-16 03:01:56.851901 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2026-02-16 03:01:57.165251 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-02-16 03:01:57.175343 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-02-16 03:01:57.192905 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-02-16 03:01:57.202933 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-02-16 03:01:57.216000 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-02-16 03:01:57.226408 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-02-16 03:01:57.237223 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-02-16 03:01:57.252803 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-02-16 03:01:57.262248 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-02-16 03:01:57.271348 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-02-16 03:01:57.280690 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-02-16 03:01:57.298340 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-02-16 03:01:57.308221 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-02-16 03:01:57.321852 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-02-16 03:01:57.337748 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-02-16 03:01:57.354968 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-02-16 03:01:57.368640 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-02-16 03:01:57.379811 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-02-16 03:01:57.393167 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-02-16 03:01:57.405536 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-02-16 03:01:57.415537 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-02-16 03:01:57.429411 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-02-16 03:01:57.449610 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-02-16 03:01:57.464878 | orchestrator | + [[ false == \t\r\u\e ]]
2026-02-16 03:01:57.612124 | orchestrator | ok: Runtime: 0:23:43.919532
2026-02-16 03:01:57.785091 |
2026-02-16 03:01:57.785187 | TASK [Deploy services]
2026-02-16 03:01:58.518314 | orchestrator |
2026-02-16 03:01:58.518668 | orchestrator | # DEPLOY SERVICES
2026-02-16 03:01:58.518702 | orchestrator |
2026-02-16 03:01:58.518718 | orchestrator | + set -e
2026-02-16 03:01:58.518793 | orchestrator | + echo
2026-02-16 03:01:58.518809 | orchestrator | + echo '# DEPLOY SERVICES'
2026-02-16 03:01:58.518824 | orchestrator | + echo
2026-02-16 03:01:58.518868 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-16 03:01:58.518892 | orchestrator | ++ export INTERACTIVE=false
2026-02-16 03:01:58.518907 | orchestrator | ++ INTERACTIVE=false
2026-02-16 03:01:58.518920 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-16 03:01:58.518941 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-16 03:01:58.518953 | orchestrator | + source /opt/manager-vars.sh
2026-02-16 03:01:58.518969 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-16 03:01:58.518989 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-16 03:01:58.519030 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-16 03:01:58.519096 | orchestrator | ++ CEPH_VERSION=reef
2026-02-16 03:01:58.519125 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-16 03:01:58.519146 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-16 03:01:58.519168 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-16 03:01:58.519185 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-16 03:01:58.519203 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-16 03:01:58.519222 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-16 03:01:58.519240 | orchestrator | ++ export ARA=false
2026-02-16 03:01:58.519259 | orchestrator | ++ ARA=false
2026-02-16 03:01:58.519279 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-16 03:01:58.519297 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-16 03:01:58.519315 | orchestrator | ++ export TEMPEST=false
2026-02-16 03:01:58.519335 | orchestrator | ++ TEMPEST=false
2026-02-16 03:01:58.519353 | orchestrator | ++ export IS_ZUUL=true
2026-02-16 03:01:58.519373 | orchestrator | ++ IS_ZUUL=true
2026-02-16 03:01:58.519385 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120
2026-02-16 03:01:58.519397 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120
2026-02-16 03:01:58.519408 | orchestrator | ++ export EXTERNAL_API=false
2026-02-16 03:01:58.519419 | orchestrator | ++ EXTERNAL_API=false
2026-02-16 03:01:58.519431 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-16 03:01:58.519442 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-16 03:01:58.519453 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-16 03:01:58.519464 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-16 03:01:58.519475 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-16 03:01:58.519495 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-16 03:01:58.519507 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-02-16 03:01:58.528806 | orchestrator |
2026-02-16 03:01:58.528870 | orchestrator | # PULL IMAGES
2026-02-16 03:01:58.528877 | orchestrator |
2026-02-16 03:01:58.528882 | orchestrator | + set -e
2026-02-16 03:01:58.528886 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-16 03:01:58.528892 | orchestrator | ++ export INTERACTIVE=false
2026-02-16 03:01:58.528896 | orchestrator | ++ INTERACTIVE=false
2026-02-16 03:01:58.528901 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-16 03:01:58.528904 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-16 03:01:58.528908 | orchestrator | + source /opt/manager-vars.sh
2026-02-16 03:01:58.528912 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-16 03:01:58.528916 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-16 03:01:58.528920 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-16 03:01:58.528924 | orchestrator | ++ CEPH_VERSION=reef
2026-02-16 03:01:58.528927 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-16 03:01:58.528931 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-16 03:01:58.528935 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-16 03:01:58.528939 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-16 03:01:58.528943 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-16 03:01:58.528947 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-16 03:01:58.528951 | orchestrator | ++ export ARA=false
2026-02-16 03:01:58.528955 | orchestrator | ++ ARA=false
2026-02-16 03:01:58.528961 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-16 03:01:58.528965 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-16 03:01:58.528968 | orchestrator | ++ export TEMPEST=false
2026-02-16 03:01:58.528972 | orchestrator | ++ TEMPEST=false 2026-02-16 03:01:58.528976 | orchestrator | ++ export IS_ZUUL=true 2026-02-16 03:01:58.528980 | orchestrator | ++ IS_ZUUL=true 2026-02-16 03:01:58.528983 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120 2026-02-16 03:01:58.528987 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120 2026-02-16 03:01:58.528991 | orchestrator | ++ export EXTERNAL_API=false 2026-02-16 03:01:58.528995 | orchestrator | ++ EXTERNAL_API=false 2026-02-16 03:01:58.528998 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-16 03:01:58.529002 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-16 03:01:58.529023 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-16 03:01:58.529027 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-16 03:01:58.529031 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-16 03:01:58.529035 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-16 03:01:58.529038 | orchestrator | + echo 2026-02-16 03:01:58.529042 | orchestrator | + echo '# PULL IMAGES' 2026-02-16 03:01:58.529046 | orchestrator | + echo 2026-02-16 03:01:58.529690 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-16 03:01:58.588997 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-16 03:01:58.589102 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-16 03:02:00.488755 | orchestrator | 2026-02-16 03:02:00 | INFO  | Trying to run play pull-images in environment custom 2026-02-16 03:02:10.568666 | orchestrator | 2026-02-16 03:02:10 | INFO  | Task b35e60d8-7fed-4c9e-a2e6-2bdb113a106c (pull-images) was prepared for execution. 2026-02-16 03:02:10.568816 | orchestrator | 2026-02-16 03:02:10 | INFO  | Task b35e60d8-7fed-4c9e-a2e6-2bdb113a106c is running in background. No more output. Check ARA for logs. 
2026-02-16 03:02:10.858691 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh 2026-02-16 03:02:22.854542 | orchestrator | 2026-02-16 03:02:22 | INFO  | Task 11baf231-dff7-4c18-b241-d85d2061f71f (cgit) was prepared for execution. 2026-02-16 03:02:22.854670 | orchestrator | 2026-02-16 03:02:22 | INFO  | Task 11baf231-dff7-4c18-b241-d85d2061f71f is running in background. No more output. Check ARA for logs. 2026-02-16 03:02:35.185701 | orchestrator | 2026-02-16 03:02:35 | INFO  | Task 1b3823af-5fa5-4a0d-97bf-406bb8f4e016 (dotfiles) was prepared for execution. 2026-02-16 03:02:35.185888 | orchestrator | 2026-02-16 03:02:35 | INFO  | Task 1b3823af-5fa5-4a0d-97bf-406bb8f4e016 is running in background. No more output. Check ARA for logs. 2026-02-16 03:02:47.842308 | orchestrator | 2026-02-16 03:02:47 | INFO  | Task b74941f2-4329-41a1-b2b5-d83f0436eb7c (homer) was prepared for execution. 2026-02-16 03:02:47.842416 | orchestrator | 2026-02-16 03:02:47 | INFO  | Task b74941f2-4329-41a1-b2b5-d83f0436eb7c is running in background. No more output. Check ARA for logs. 2026-02-16 03:03:00.290269 | orchestrator | 2026-02-16 03:03:00 | INFO  | Task e30b903b-06f0-4839-ac0a-907f27c521d9 (phpmyadmin) was prepared for execution. 2026-02-16 03:03:00.290361 | orchestrator | 2026-02-16 03:03:00 | INFO  | Task e30b903b-06f0-4839-ac0a-907f27c521d9 is running in background. No more output. Check ARA for logs. 2026-02-16 03:03:12.545973 | orchestrator | 2026-02-16 03:03:12 | INFO  | Task f5360c57-ff21-48c1-a28b-2f3ac240d8a2 (sosreport) was prepared for execution. 2026-02-16 03:03:12.546083 | orchestrator | 2026-02-16 03:03:12 | INFO  | Task f5360c57-ff21-48c1-a28b-2f3ac240d8a2 is running in background. No more output. Check ARA for logs. 
2026-02-16 03:03:12.821663 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh 2026-02-16 03:03:12.828765 | orchestrator | + set -e 2026-02-16 03:03:12.828900 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-16 03:03:12.828928 | orchestrator | ++ export INTERACTIVE=false 2026-02-16 03:03:12.828949 | orchestrator | ++ INTERACTIVE=false 2026-02-16 03:03:12.828970 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-16 03:03:12.828989 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-16 03:03:12.829007 | orchestrator | + source /opt/manager-vars.sh 2026-02-16 03:03:12.829026 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-16 03:03:12.829045 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-16 03:03:12.829061 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-16 03:03:12.829072 | orchestrator | ++ CEPH_VERSION=reef 2026-02-16 03:03:12.829084 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-16 03:03:12.829095 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-16 03:03:12.829106 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-16 03:03:12.829117 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-16 03:03:12.829128 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-16 03:03:12.829139 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-16 03:03:12.829151 | orchestrator | ++ export ARA=false 2026-02-16 03:03:12.829162 | orchestrator | ++ ARA=false 2026-02-16 03:03:12.829173 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-16 03:03:12.829217 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-16 03:03:12.829229 | orchestrator | ++ export TEMPEST=false 2026-02-16 03:03:12.829240 | orchestrator | ++ TEMPEST=false 2026-02-16 03:03:12.829251 | orchestrator | ++ export IS_ZUUL=true 2026-02-16 03:03:12.829261 | orchestrator | ++ IS_ZUUL=true 2026-02-16 03:03:12.829288 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120 2026-02-16 03:03:12.829305 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120 2026-02-16 03:03:12.829317 | orchestrator | ++ export EXTERNAL_API=false 2026-02-16 03:03:12.829327 | orchestrator | ++ EXTERNAL_API=false 2026-02-16 03:03:12.829338 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-16 03:03:12.829349 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-16 03:03:12.829360 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-16 03:03:12.829371 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-16 03:03:12.829382 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-16 03:03:12.829392 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-16 03:03:12.829403 | orchestrator | ++ semver 9.5.0 8.0.3 2026-02-16 03:03:12.858463 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-16 03:03:12.858553 | orchestrator | + osism apply frr 2026-02-16 03:03:24.954503 | orchestrator | 2026-02-16 03:03:24 | INFO  | Task aa9e818b-1893-4f21-acec-223cfece2d34 (frr) was prepared for execution. 2026-02-16 03:03:24.954645 | orchestrator | 2026-02-16 03:03:24 | INFO  | It takes a moment until task aa9e818b-1893-4f21-acec-223cfece2d34 (frr) has been started and output is visible here. 
2026-02-16 03:03:52.159149 | orchestrator | 2026-02-16 03:03:52.159253 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-16 03:03:52.159268 | orchestrator | 2026-02-16 03:03:52.159277 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-16 03:03:52.159292 | orchestrator | Monday 16 February 2026 03:03:30 +0000 (0:00:00.177) 0:00:00.177 ******* 2026-02-16 03:03:52.159301 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-16 03:03:52.159311 | orchestrator | 2026-02-16 03:03:52.159320 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-16 03:03:52.159328 | orchestrator | Monday 16 February 2026 03:03:30 +0000 (0:00:00.298) 0:00:00.475 ******* 2026-02-16 03:03:52.159336 | orchestrator | changed: [testbed-manager] 2026-02-16 03:03:52.159346 | orchestrator | 2026-02-16 03:03:52.159355 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-16 03:03:52.159366 | orchestrator | Monday 16 February 2026 03:03:31 +0000 (0:00:01.471) 0:00:01.947 ******* 2026-02-16 03:03:52.159374 | orchestrator | changed: [testbed-manager] 2026-02-16 03:03:52.159382 | orchestrator | 2026-02-16 03:03:52.159390 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-16 03:03:52.159399 | orchestrator | Monday 16 February 2026 03:03:41 +0000 (0:00:09.975) 0:00:11.923 ******* 2026-02-16 03:03:52.159407 | orchestrator | ok: [testbed-manager] 2026-02-16 03:03:52.159416 | orchestrator | 2026-02-16 03:03:52.159424 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-16 03:03:52.159432 | orchestrator | Monday 16 February 2026 03:03:42 +0000 (0:00:01.056) 0:00:12.980 ******* 2026-02-16 
03:03:52.159440 | orchestrator | changed: [testbed-manager] 2026-02-16 03:03:52.159449 | orchestrator | 2026-02-16 03:03:52.159456 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-16 03:03:52.159465 | orchestrator | Monday 16 February 2026 03:03:45 +0000 (0:00:02.268) 0:00:15.248 ******* 2026-02-16 03:03:52.159473 | orchestrator | ok: [testbed-manager] 2026-02-16 03:03:52.159481 | orchestrator | 2026-02-16 03:03:52.159489 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-16 03:03:52.159499 | orchestrator | Monday 16 February 2026 03:03:46 +0000 (0:00:01.131) 0:00:16.380 ******* 2026-02-16 03:03:52.159507 | orchestrator | skipping: [testbed-manager] 2026-02-16 03:03:52.159514 | orchestrator | 2026-02-16 03:03:52.159522 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-16 03:03:52.159531 | orchestrator | Monday 16 February 2026 03:03:46 +0000 (0:00:00.187) 0:00:16.568 ******* 2026-02-16 03:03:52.159558 | orchestrator | skipping: [testbed-manager] 2026-02-16 03:03:52.159567 | orchestrator | 2026-02-16 03:03:52.159575 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-16 03:03:52.159582 | orchestrator | Monday 16 February 2026 03:03:46 +0000 (0:00:00.137) 0:00:16.706 ******* 2026-02-16 03:03:52.159590 | orchestrator | changed: [testbed-manager] 2026-02-16 03:03:52.159598 | orchestrator | 2026-02-16 03:03:52.159605 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-16 03:03:52.159613 | orchestrator | Monday 16 February 2026 03:03:47 +0000 (0:00:00.999) 0:00:17.706 ******* 2026-02-16 03:03:52.159621 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-16 03:03:52.159629 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-16 03:03:52.159639 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-16 03:03:52.159647 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-16 03:03:52.159655 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-16 03:03:52.159663 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-16 03:03:52.159671 | orchestrator | 2026-02-16 03:03:52.159679 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-16 03:03:52.159687 | orchestrator | Monday 16 February 2026 03:03:49 +0000 (0:00:01.693) 0:00:19.399 ******* 2026-02-16 03:03:52.159696 | orchestrator | ok: [testbed-manager] 2026-02-16 03:03:52.159704 | orchestrator | 2026-02-16 03:03:52.159712 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-16 03:03:52.159720 | orchestrator | Monday 16 February 2026 03:03:50 +0000 (0:00:01.433) 0:00:20.832 ******* 2026-02-16 03:03:52.159728 | orchestrator | changed: [testbed-manager] 2026-02-16 03:03:52.159737 | orchestrator | 2026-02-16 03:03:52.159745 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 03:03:52.159754 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:03:52.159762 | orchestrator | 2026-02-16 03:03:52.159770 | orchestrator | 2026-02-16 03:03:52.159783 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 03:03:52.159792 | orchestrator | Monday 16 February 2026 03:03:51 +0000 (0:00:01.265) 0:00:22.097 ******* 2026-02-16 03:03:52.159800 | 
orchestrator | =============================================================================== 2026-02-16 03:03:52.159808 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.98s 2026-02-16 03:03:52.159817 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 2.27s 2026-02-16 03:03:52.159825 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 1.69s 2026-02-16 03:03:52.159833 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.47s 2026-02-16 03:03:52.159841 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.43s 2026-02-16 03:03:52.159866 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.27s 2026-02-16 03:03:52.159876 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.13s 2026-02-16 03:03:52.159884 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.06s 2026-02-16 03:03:52.159893 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.00s 2026-02-16 03:03:52.159901 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.30s 2026-02-16 03:03:52.159933 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.19s 2026-02-16 03:03:52.159940 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.14s 2026-02-16 03:03:52.359548 | orchestrator | + osism apply kubernetes 2026-02-16 03:03:54.189533 | orchestrator | 2026-02-16 03:03:54 | INFO  | Task 4c7c8094-f6e2-46e8-aa30-0164b01fe8b6 (kubernetes) was prepared for execution. 
2026-02-16 03:03:54.189637 | orchestrator | 2026-02-16 03:03:54 | INFO  | It takes a moment until task 4c7c8094-f6e2-46e8-aa30-0164b01fe8b6 (kubernetes) has been started and output is visible here. 2026-02-16 03:04:23.795027 | orchestrator | 2026-02-16 03:04:23.795126 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-16 03:04:23.795141 | orchestrator | 2026-02-16 03:04:23.795149 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-16 03:04:23.795158 | orchestrator | Monday 16 February 2026 03:03:58 +0000 (0:00:00.208) 0:00:00.208 ******* 2026-02-16 03:04:23.795165 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:04:23.795173 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:04:23.795180 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:04:23.795188 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:04:23.795195 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:04:23.795203 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:04:23.795210 | orchestrator | 2026-02-16 03:04:23.795218 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-16 03:04:23.795224 | orchestrator | Monday 16 February 2026 03:03:59 +0000 (0:00:00.652) 0:00:00.861 ******* 2026-02-16 03:04:23.795231 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:04:23.795238 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:04:23.795245 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:04:23.795251 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:04:23.795258 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:04:23.795264 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:04:23.795270 | orchestrator | 2026-02-16 03:04:23.795276 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-16 03:04:23.795284 | orchestrator | Monday 16 February 2026 
03:03:59 +0000 (0:00:00.521) 0:00:01.382 ******* 2026-02-16 03:04:23.795291 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:04:23.795297 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:04:23.795303 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:04:23.795309 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:04:23.795315 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:04:23.795323 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:04:23.795330 | orchestrator | 2026-02-16 03:04:23.795338 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-16 03:04:23.795345 | orchestrator | Monday 16 February 2026 03:04:00 +0000 (0:00:00.656) 0:00:02.039 ******* 2026-02-16 03:04:23.795353 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:04:23.795360 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:04:23.795368 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:04:23.795379 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:04:23.795387 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:04:23.795394 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:04:23.795400 | orchestrator | 2026-02-16 03:04:23.795407 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-16 03:04:23.795415 | orchestrator | Monday 16 February 2026 03:04:01 +0000 (0:00:01.337) 0:00:03.376 ******* 2026-02-16 03:04:23.795423 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:04:23.795430 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:04:23.795438 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:04:23.795445 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:04:23.795453 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:04:23.795460 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:04:23.795468 | orchestrator | 2026-02-16 03:04:23.795476 | orchestrator | TASK [k3s_prereq : 
Enable IPv6 router advertisements] ************************** 2026-02-16 03:04:23.795483 | orchestrator | Monday 16 February 2026 03:04:03 +0000 (0:00:01.644) 0:00:05.020 ******* 2026-02-16 03:04:23.795491 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:04:23.795519 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:04:23.795527 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:04:23.795535 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:04:23.795542 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:04:23.795550 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:04:23.795557 | orchestrator | 2026-02-16 03:04:23.795573 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-16 03:04:23.795581 | orchestrator | Monday 16 February 2026 03:04:04 +0000 (0:00:01.119) 0:00:06.140 ******* 2026-02-16 03:04:23.795589 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:04:23.795597 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:04:23.795604 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:04:23.795612 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:04:23.795619 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:04:23.795626 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:04:23.795634 | orchestrator | 2026-02-16 03:04:23.795641 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-16 03:04:23.795647 | orchestrator | Monday 16 February 2026 03:04:05 +0000 (0:00:00.740) 0:00:06.881 ******* 2026-02-16 03:04:23.795654 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:04:23.795660 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:04:23.795667 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:04:23.795673 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:04:23.795679 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:04:23.795685 | orchestrator | 
skipping: [testbed-node-2] 2026-02-16 03:04:23.795691 | orchestrator | 2026-02-16 03:04:23.795697 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-16 03:04:23.795703 | orchestrator | Monday 16 February 2026 03:04:06 +0000 (0:00:01.347) 0:00:08.228 ******* 2026-02-16 03:04:23.795710 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-16 03:04:23.795716 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 03:04:23.795723 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:04:23.795731 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-16 03:04:23.795737 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 03:04:23.795744 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:04:23.795750 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-16 03:04:23.795756 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 03:04:23.795762 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:04:23.795769 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-16 03:04:23.795792 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 03:04:23.795800 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:04:23.795807 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-16 03:04:23.795814 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 03:04:23.795820 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:04:23.795827 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-16 03:04:23.795834 | 
orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 03:04:23.795840 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:04:23.795847 | orchestrator | 2026-02-16 03:04:23.795854 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-02-16 03:04:23.795860 | orchestrator | Monday 16 February 2026 03:04:07 +0000 (0:00:00.974) 0:00:09.203 ******* 2026-02-16 03:04:23.795867 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:04:23.795873 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:04:23.795880 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:04:23.795894 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:04:23.795901 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:04:23.795908 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:04:23.795915 | orchestrator | 2026-02-16 03:04:23.795923 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-16 03:04:23.795931 | orchestrator | Monday 16 February 2026 03:04:08 +0000 (0:00:01.090) 0:00:10.294 ******* 2026-02-16 03:04:23.795938 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:04:23.795944 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:04:23.795974 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:04:23.795982 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:04:23.795989 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:04:23.795995 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:04:23.796001 | orchestrator | 2026-02-16 03:04:23.796007 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-16 03:04:23.796014 | orchestrator | Monday 16 February 2026 03:04:09 +0000 (0:00:00.647) 0:00:10.942 ******* 2026-02-16 03:04:23.796020 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:04:23.796026 | orchestrator | changed: 
[testbed-node-0] 2026-02-16 03:04:23.796032 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:04:23.796039 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:04:23.796045 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:04:23.796053 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": false, "dest": "/usr/local/bin/k3s", "elapsed": 10, "msg": "Connection failure: The read operation timed out", "url": "https://github.com/k3s-io/k3s/releases/download/v1.34.1+k3s1/k3s"} 2026-02-16 03:04:23.796068 | orchestrator | 2026-02-16 03:04:23.796079 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-16 03:04:23.796086 | orchestrator | Monday 16 February 2026 03:04:20 +0000 (0:00:11.402) 0:00:22.344 ******* 2026-02-16 03:04:23.796092 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:04:23.796098 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:04:23.796105 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:04:23.796111 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:04:23.796118 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:04:23.796125 | orchestrator | 2026-02-16 03:04:23.796132 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-16 03:04:23.796139 | orchestrator | Monday 16 February 2026 03:04:21 +0000 (0:00:00.691) 0:00:23.036 ******* 2026-02-16 03:04:23.796146 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:04:23.796153 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:04:23.796160 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:04:23.796166 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:04:23.796173 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:04:23.796179 | orchestrator | 2026-02-16 03:04:23.796185 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container 
registry] ***
Monday 16 February 2026 03:04:22 +0000 (0:00:00.979) 0:00:24.015 *******
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
Monday 16 February 2026 03:04:22 +0000 (0:00:00.446) 0:00:24.462 *******
skipping: [testbed-node-4] => (item=rancher)
skipping: [testbed-node-4] => (item=rancher/k3s)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=rancher)
skipping: [testbed-node-5] => (item=rancher/k3s)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=rancher)
skipping: [testbed-node-0] => (item=rancher/k3s)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=rancher)
skipping: [testbed-node-1] => (item=rancher/k3s)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=rancher)
skipping: [testbed-node-2] => (item=rancher/k3s)
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
Monday 16 February 2026 03:04:23 +0000 (0:00:00.466) 0:00:24.928 *******
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
Monday 16 February 2026 03:04:24 +0000 (0:00:00.635) 0:00:25.564 *******
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY [Deploy k3s master nodes] *************************************************

TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
Monday 16 February 2026 03:04:25 +0000 (0:00:00.950) 0:00:26.515 *******
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [k3s_server : Stop k3s-init] **********************************************
Monday 16 February 2026 03:04:25 +0000 (0:00:00.808) 0:00:27.323 *******
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [k3s_server : Stop k3s] ***************************************************
Monday 16 February 2026 03:04:26 +0000 (0:00:01.101) 0:00:28.425 *******
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [k3s_server : Clean previous runs of k3s-init] ****************************
Monday 16 February 2026 03:04:28 +0000 (0:00:01.683) 0:00:30.109 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
Monday 16 February 2026 03:04:29 +0000 (0:00:00.880) 0:00:30.989 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
Monday 16 February 2026 03:04:29 +0000 (0:00:00.284) 0:00:31.274 *******
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-1]

TASK [k3s_server : Create custom resolv.conf for k3s] **************************
Monday 16 February 2026 03:04:30 +0000 (0:00:00.663) 0:00:31.938 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Deploy vip manifest] ****************************************
Monday 16 February 2026 03:04:31 +0000 (0:00:01.335) 0:00:33.273 *******
included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
Monday 16 February 2026 03:04:32 +0000 (0:00:00.666) 0:00:33.940 *******
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [k3s_server : Create manifests directory on first master] *****************
Monday 16 February 2026 03:04:34 +0000 (0:00:01.758) 0:00:35.698 *******
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Download vip rbac manifest to first master] *****************
Monday 16 February 2026 03:04:34 +0000 (0:00:00.563) 0:00:36.261 *******
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Copy vip manifest to first master] **************************
Monday 16 February 2026 03:04:35 +0000 (0:00:01.044) 0:00:37.306 *******
skipping: [testbed-node-2]
skipping: [testbed-node-1]
changed: [testbed-node-0]

TASK [k3s_server : Deploy metallb manifest] ************************************
Monday 16 February 2026 03:04:37 +0000 (0:00:01.441) 0:00:38.747 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Deploy kube-vip manifest] ***********************************
Monday 16 February 2026 03:04:37 +0000 (0:00:00.262) 0:00:39.010 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
Monday 16 February 2026 03:04:37 +0000 (0:00:00.243) 0:00:39.253 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
Monday 16 February 2026 03:04:38 +0000 (0:00:01.103) 0:00:40.357 *******
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
Monday 16 February 2026 03:04:42 +0000 (0:00:03.183) 0:00:43.540 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
Monday 16 February 2026 03:04:42 +0000 (0:00:00.294) 0:00:43.834 *******
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [k3s_server : Save logs of k3s-init.service] ******************************
Monday 16 February 2026 03:05:35 +0000 (0:00:53.637) 0:01:37.471 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Kill the temporary service used for initialization] *********
Monday 16 February 2026 03:05:36 +0000 (0:00:00.315) 0:01:37.787 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy K3s service file] **************************************
Monday 16 February 2026 03:05:37 +0000 (0:00:00.943) 0:01:38.730 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Enable and check K3s service] *******************************
Monday 16 February 2026 03:05:38 +0000 (0:00:01.351) 0:01:40.082 *******
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

TASK [k3s_server : Wait for node-token] ****************************************
Monday 16 February 2026 03:06:05 +0000 (0:00:26.512) 0:02:06.594 *******
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Register node-token file access mode] ***********************
Monday 16 February 2026 03:06:05 +0000 (0:00:00.612) 0:02:07.206 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Change file access node-token] ******************************
Monday 16 February 2026 03:06:06 +0000 (0:00:00.634) 0:02:07.840 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Read node-token from master] ********************************
Monday 16 February 2026 03:06:07 +0000 (0:00:00.820) 0:02:08.661 *******
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Store Master node-token] ************************************
Monday 16 February 2026 03:06:07 +0000 (0:00:00.568) 0:02:09.229 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Restore node-token file access] *****************************
Monday 16 February 2026 03:06:08 +0000 (0:00:00.278) 0:02:09.508 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create directory .kube] *************************************
Monday 16 February 2026 03:06:08 +0000 (0:00:00.595) 0:02:10.103 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy config file to user home directory] ********************
Monday 16 February 2026 03:06:09 +0000 (0:00:00.753) 0:02:10.856 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
Monday 16 February 2026 03:06:10 +0000 (0:00:00.864) 0:02:11.721 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create kubectl symlink] *************************************
Monday 16 February 2026 03:06:11 +0000 (0:00:00.798) 0:02:12.519 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create crictl symlink] **************************************
Monday 16 February 2026 03:06:11 +0000 (0:00:00.293) 0:02:12.812 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Get contents of manifests folder] ***************************
Monday 16 February 2026 03:06:11 +0000 (0:00:00.458) 0:02:13.271 *******
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Monday 16 February 2026 03:06:12 +0000 (0:00:00.595) 0:02:13.867 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Monday 16 February 2026 03:06:13 +0000 (0:00:00.615) 0:02:14.483 *******
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Monday 16 February 2026 03:06:16 +0000 (0:00:03.011) 0:02:17.494 *******
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Monday 16 February 2026 03:06:16 +0000 (0:00:00.419) 0:02:17.914 *******
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Monday 16 February 2026 03:06:16 +0000 (0:00:00.517) 0:02:18.431 *******
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Monday 16 February 2026 03:06:17 +0000 (0:00:00.202) 0:02:18.634 *******
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Monday 16 February 2026 03:06:17 +0000 (0:00:00.320) 0:02:18.954 *******
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Monday 16 February 2026 03:06:17 +0000 (0:00:00.223) 0:02:19.178 *******
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Monday 16 February 2026 03:06:18 +0000 (0:00:00.364) 0:02:19.543 *******
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
Monday 16 February 2026 03:06:18 +0000 (0:00:00.213) 0:02:19.757 *******
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
Monday 16 February 2026 03:06:18 +0000 (0:00:00.514) 0:02:20.271 *******
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Configure the k3s service] ***********************************
Monday 16 February 2026 03:06:19 +0000 (0:00:01.057) 0:02:21.328 *******
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Manage k3s service] ******************************************
Monday 16 February 2026 03:06:20 +0000 (0:00:01.138) 0:02:22.467 *******
changed: [testbed-node-5]
changed: [testbed-node-4]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Monday 16 February 2026 03:06:30 +0000 (0:00:09.955) 0:02:32.422 *******
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Monday 16 February 2026 03:06:31 +0000 (0:00:00.783) 0:02:33.206 *******
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Monday 16 February 2026 03:06:32 +0000 (0:00:00.426) 0:02:33.632 *******
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Monday 16 February 2026 03:06:32 +0000 (0:00:00.509) 0:02:34.142 *******
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Monday 16 February 2026 03:06:33 +0000 (0:00:00.848) 0:02:34.991 *******
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Monday 16 February 2026 03:06:34 +0000 (0:00:00.542) 0:02:35.534 *******
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Monday 16 February 2026 03:06:35 +0000 (0:00:01.506) 0:02:37.041 *******
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Monday 16 February 2026 03:06:36 +0000 (0:00:00.788) 0:02:37.829 *******
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Monday 16 February 2026 03:06:36 +0000 (0:00:00.591) 0:02:38.421 *******
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Monday 16 February 2026 03:06:37 +0000 (0:00:00.435) 0:02:38.856 *******
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Monday 16 February 2026 03:06:37 +0000 (0:00:00.147) 0:02:39.004 *******
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Monday 16 February 2026 03:06:37 +0000 (0:00:00.233) 0:02:39.238 *******
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Monday 16 February 2026 03:06:38 +0000 (0:00:00.808) 0:02:40.047 *******
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Monday 16 February 2026 03:06:39 +0000 (0:00:01.379) 0:02:41.427 *******
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Monday 16 February 2026 03:06:40 +0000 (0:00:00.834) 0:02:42.261 *******
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Monday 16 February 2026 03:06:41 +0000 (0:00:00.432) 0:02:42.693 *******
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Monday 16 February 2026 03:06:48 +0000 (0:00:07.204) 0:02:49.898 *******
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Monday 16 February 2026 03:07:00 +0000 (0:00:12.075) 0:03:01.973 *******
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Monday 16 February 2026 03:07:00 +0000 (0:00:00.496) 0:03:02.470 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Monday 16 February 2026 03:07:01 +0000 (0:00:00.279) 0:03:02.749 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Monday 16 February 2026 03:07:01 +0000 (0:00:00.266) 0:03:03.016 *******
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Monday 16
February 2026 03:07:02 +0000 (0:00:00.638) 0:03:03.655 ******* 2026-02-16 03:07:53.964026 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-16 03:07:53.964037 | orchestrator | 2026-02-16 03:07:53.964048 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-02-16 03:07:53.964059 | orchestrator | Monday 16 February 2026 03:07:02 +0000 (0:00:00.763) 0:03:04.418 ******* 2026-02-16 03:07:53.964070 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 03:07:53.964080 | orchestrator | 2026-02-16 03:07:53.964091 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-16 03:07:53.964102 | orchestrator | Monday 16 February 2026 03:07:03 +0000 (0:00:00.790) 0:03:05.208 ******* 2026-02-16 03:07:53.964113 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:07:53.964128 | orchestrator | 2026-02-16 03:07:53.964146 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-16 03:07:53.964166 | orchestrator | Monday 16 February 2026 03:07:03 +0000 (0:00:00.127) 0:03:05.336 ******* 2026-02-16 03:07:53.964184 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 03:07:53.964202 | orchestrator | 2026-02-16 03:07:53.964222 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-16 03:07:53.964241 | orchestrator | Monday 16 February 2026 03:07:04 +0000 (0:00:00.957) 0:03:06.294 ******* 2026-02-16 03:07:53.964259 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:07:53.964270 | orchestrator | 2026-02-16 03:07:53.964281 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-16 03:07:53.964321 | orchestrator | Monday 16 February 2026 03:07:04 +0000 (0:00:00.118) 0:03:06.412 ******* 2026-02-16 03:07:53.964343 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:07:53.964354 | orchestrator | 2026-02-16 
03:07:53.964365 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-16 03:07:53.964376 | orchestrator | Monday 16 February 2026 03:07:05 +0000 (0:00:00.100) 0:03:06.512 ******* 2026-02-16 03:07:53.964387 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:07:53.964397 | orchestrator | 2026-02-16 03:07:53.964408 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-16 03:07:53.964419 | orchestrator | Monday 16 February 2026 03:07:05 +0000 (0:00:00.127) 0:03:06.639 ******* 2026-02-16 03:07:53.964430 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:07:53.964441 | orchestrator | 2026-02-16 03:07:53.964452 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-16 03:07:53.964462 | orchestrator | Monday 16 February 2026 03:07:05 +0000 (0:00:00.121) 0:03:06.761 ******* 2026-02-16 03:07:53.964473 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-16 03:07:53.964484 | orchestrator | 2026-02-16 03:07:53.964495 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-16 03:07:53.964505 | orchestrator | Monday 16 February 2026 03:07:10 +0000 (0:00:05.405) 0:03:12.167 ******* 2026-02-16 03:07:53.964516 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-16 03:07:53.964535 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-02-16 03:07:53.964556 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-16 03:07:53.964586 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-16 03:07:53.964600 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-16 03:07:53.964611 | orchestrator | 2026-02-16 03:07:53.964622 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-16 03:07:53.964633 | orchestrator | Monday 16 February 2026 03:07:52 +0000 (0:00:42.106) 0:03:54.274 ******* 2026-02-16 03:07:53.964644 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 03:07:53.964654 | orchestrator | 2026-02-16 03:07:53.964665 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-16 03:07:53.964688 | orchestrator | Monday 16 February 2026 03:07:53 +0000 (0:00:01.147) 0:03:55.421 ******* 2026-02-16 03:08:15.021439 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-16 03:08:15.021544 | orchestrator | 2026-02-16 03:08:15.021560 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-16 03:08:15.021573 | orchestrator | Monday 16 February 2026 03:07:55 +0000 (0:00:01.503) 0:03:56.924 ******* 2026-02-16 03:08:15.021584 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-16 03:08:15.021596 | orchestrator | 2026-02-16 03:08:15.021608 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-16 03:08:15.021619 | orchestrator | Monday 16 February 2026 03:07:56 +0000 (0:00:01.081) 0:03:58.006 ******* 2026-02-16 03:08:15.021630 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:08:15.021641 | orchestrator | 2026-02-16 03:08:15.021653 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-16 03:08:15.021664 | orchestrator 
| Monday 16 February 2026 03:07:56 +0000 (0:00:00.127) 0:03:58.133 ******* 2026-02-16 03:08:15.021674 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-16 03:08:15.021687 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-16 03:08:15.021697 | orchestrator | 2026-02-16 03:08:15.021708 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-16 03:08:15.021719 | orchestrator | Monday 16 February 2026 03:07:58 +0000 (0:00:01.764) 0:03:59.898 ******* 2026-02-16 03:08:15.021730 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:08:15.021741 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:08:15.021752 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:08:15.021787 | orchestrator | 2026-02-16 03:08:15.021799 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-16 03:08:15.021810 | orchestrator | Monday 16 February 2026 03:07:58 +0000 (0:00:00.311) 0:04:00.209 ******* 2026-02-16 03:08:15.021821 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:08:15.021832 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:08:15.021843 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:08:15.021853 | orchestrator | 2026-02-16 03:08:15.021864 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-16 03:08:15.021875 | orchestrator | 2026-02-16 03:08:15.021886 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-16 03:08:15.021896 | orchestrator | Monday 16 February 2026 03:07:59 +0000 (0:00:01.014) 0:04:01.224 ******* 2026-02-16 03:08:15.021907 | orchestrator | ok: [testbed-manager] 2026-02-16 03:08:15.021918 | orchestrator | 2026-02-16 03:08:15.021931 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-02-16 03:08:15.021944 | orchestrator | Monday 16 February 2026 03:07:59 +0000 (0:00:00.148) 0:04:01.372 ******* 2026-02-16 03:08:15.021957 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-16 03:08:15.021970 | orchestrator | 2026-02-16 03:08:15.021982 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-16 03:08:15.021995 | orchestrator | Monday 16 February 2026 03:08:00 +0000 (0:00:00.246) 0:04:01.618 ******* 2026-02-16 03:08:15.022008 | orchestrator | changed: [testbed-manager] 2026-02-16 03:08:15.022105 | orchestrator | 2026-02-16 03:08:15.022125 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-16 03:08:15.022145 | orchestrator | 2026-02-16 03:08:15.022166 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-16 03:08:15.022187 | orchestrator | Monday 16 February 2026 03:08:06 +0000 (0:00:06.242) 0:04:07.860 ******* 2026-02-16 03:08:15.022207 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:08:15.022224 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:08:15.022237 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:08:15.022249 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:08:15.022262 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:08:15.022275 | orchestrator | 2026-02-16 03:08:15.022288 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-16 03:08:15.022299 | orchestrator | Monday 16 February 2026 03:08:07 +0000 (0:00:00.649) 0:04:08.510 ******* 2026-02-16 03:08:15.022311 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-16 03:08:15.022348 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-16 03:08:15.022359 | orchestrator | ok: 
[testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-16 03:08:15.022370 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-16 03:08:15.022381 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-16 03:08:15.022392 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-16 03:08:15.022403 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-16 03:08:15.022414 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-16 03:08:15.022425 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-16 03:08:15.022436 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-16 03:08:15.022446 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-16 03:08:15.022457 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-16 03:08:15.022468 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-16 03:08:15.022490 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-16 03:08:15.022501 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-16 03:08:15.022533 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-16 03:08:15.022544 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-16 03:08:15.022556 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-16 03:08:15.022566 | orchestrator | ok: 
[testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-16 03:08:15.022577 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-16 03:08:15.022588 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-16 03:08:15.022599 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-16 03:08:15.022609 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-16 03:08:15.022620 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-16 03:08:15.022649 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-16 03:08:15.022660 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-16 03:08:15.022671 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-16 03:08:15.022682 | orchestrator | 2026-02-16 03:08:15.022694 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-16 03:08:15.022705 | orchestrator | Monday 16 February 2026 03:08:14 +0000 (0:00:06.990) 0:04:15.501 ******* 2026-02-16 03:08:15.022716 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:08:15.022727 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:08:15.022738 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:08:15.022749 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:08:15.022760 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:08:15.022771 | orchestrator | 2026-02-16 03:08:15.022782 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-16 03:08:15.022793 | orchestrator | Monday 16 February 2026 03:08:14 +0000 (0:00:00.416) 0:04:15.917 ******* 2026-02-16 
03:08:15.022804 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:08:15.022815 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:08:15.022825 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:08:15.022836 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:08:15.022847 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:08:15.022857 | orchestrator | 2026-02-16 03:08:15.022868 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 03:08:15.022880 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 03:08:15.022893 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-16 03:08:15.022905 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-16 03:08:15.022916 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-16 03:08:15.022927 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-16 03:08:15.022944 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-16 03:08:15.022962 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-16 03:08:15.022973 | orchestrator | 2026-02-16 03:08:15.022984 | orchestrator | 2026-02-16 03:08:15.022995 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 03:08:15.023006 | orchestrator | Monday 16 February 2026 03:08:14 +0000 (0:00:00.553) 0:04:16.470 ******* 2026-02-16 03:08:15.023017 | orchestrator | =============================================================================== 2026-02-16 03:08:15.023028 | orchestrator | k3s_server : Verify that all nodes actually 
joined (check k3s-init.service if this fails) -- 53.64s 2026-02-16 03:08:15.023039 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.11s 2026-02-16 03:08:15.023050 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.51s 2026-02-16 03:08:15.023061 | orchestrator | kubectl : Install required packages ------------------------------------ 12.08s 2026-02-16 03:08:15.023072 | orchestrator | k3s_download : Download k3s binary x64 --------------------------------- 11.40s 2026-02-16 03:08:15.023083 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.96s 2026-02-16 03:08:15.023093 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.20s 2026-02-16 03:08:15.023104 | orchestrator | Manage labels ----------------------------------------------------------- 6.99s 2026-02-16 03:08:15.023115 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.24s 2026-02-16 03:08:15.023126 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.41s 2026-02-16 03:08:15.023143 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.18s 2026-02-16 03:08:15.361948 | orchestrator | 2026-02-16 03:08:15 | INFO  | Task 84e5b44e-671b-461a-922f-4f0f64bc7c55 (kubernetes) was prepared for execution. 2026-02-16 03:08:15.362111 | orchestrator | 2026-02-16 03:08:15 | INFO  | It takes a moment until task 84e5b44e-671b-461a-922f-4f0f64bc7c55 (kubernetes) has been started and output is visible here. 
2026-02-16 03:08:33.074592 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.01s 2026-02-16 03:08:33.074710 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.76s 2026-02-16 03:08:33.074736 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.76s 2026-02-16 03:08:33.074756 | orchestrator | k3s_server : Stop k3s --------------------------------------------------- 1.68s 2026-02-16 03:08:33.074774 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.64s 2026-02-16 03:08:33.074791 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.51s 2026-02-16 03:08:33.074809 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.50s 2026-02-16 03:08:33.074826 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.44s 2026-02-16 03:08:33.074845 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.38s 2026-02-16 03:08:33.074863 | orchestrator | 2026-02-16 03:08:33.074885 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-16 03:08:33.074904 | orchestrator | 2026-02-16 03:08:33.074924 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-16 03:08:33.074937 | orchestrator | Monday 16 February 2026 03:08:19 +0000 (0:00:00.112) 0:00:00.112 ******* 2026-02-16 03:08:33.074948 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:08:33.074960 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:08:33.074971 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:08:33.074981 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:08:33.074992 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:08:33.075003 | orchestrator | 
ok: [testbed-node-2] 2026-02-16 03:08:33.075014 | orchestrator | 2026-02-16 03:08:33.075025 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-16 03:08:33.075065 | orchestrator | Monday 16 February 2026 03:08:19 +0000 (0:00:00.468) 0:00:00.580 ******* 2026-02-16 03:08:33.075077 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:08:33.075088 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:08:33.075099 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:08:33.075110 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:08:33.075121 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:08:33.075135 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:08:33.075148 | orchestrator | 2026-02-16 03:08:33.075161 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-16 03:08:33.075174 | orchestrator | Monday 16 February 2026 03:08:20 +0000 (0:00:00.394) 0:00:00.975 ******* 2026-02-16 03:08:33.075186 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:08:33.075199 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:08:33.075212 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:08:33.075224 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:08:33.075237 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:08:33.075250 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:08:33.075262 | orchestrator | 2026-02-16 03:08:33.075275 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-16 03:08:33.075287 | orchestrator | Monday 16 February 2026 03:08:20 +0000 (0:00:00.443) 0:00:01.419 ******* 2026-02-16 03:08:33.075297 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:08:33.075308 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:08:33.075319 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:08:33.075330 | orchestrator | ok: [testbed-node-0] 
2026-02-16 03:08:33.075341 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:08:33.075384 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:08:33.075396 | orchestrator | 2026-02-16 03:08:33.075407 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-16 03:08:33.075440 | orchestrator | Monday 16 February 2026 03:08:21 +0000 (0:00:00.755) 0:00:02.175 ******* 2026-02-16 03:08:33.075452 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:08:33.075463 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:08:33.075474 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:08:33.075484 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:08:33.075495 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:08:33.075506 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:08:33.075517 | orchestrator | 2026-02-16 03:08:33.075528 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-02-16 03:08:33.075539 | orchestrator | Monday 16 February 2026 03:08:22 +0000 (0:00:00.929) 0:00:03.104 ******* 2026-02-16 03:08:33.075550 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:08:33.075561 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:08:33.075571 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:08:33.075582 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:08:33.075593 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:08:33.075604 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:08:33.075615 | orchestrator | 2026-02-16 03:08:33.075626 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-16 03:08:33.075637 | orchestrator | Monday 16 February 2026 03:08:23 +0000 (0:00:00.835) 0:00:03.940 ******* 2026-02-16 03:08:33.075648 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:08:33.075659 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:08:33.075669 | orchestrator | skipping: [testbed-node-5] 2026-02-16 
03:08:33.075680 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:08:33.075691 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:08:33.075702 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:08:33.075713 | orchestrator | 2026-02-16 03:08:33.075724 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-16 03:08:33.075735 | orchestrator | Monday 16 February 2026 03:08:23 +0000 (0:00:00.449) 0:00:04.390 ******* 2026-02-16 03:08:33.075746 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:08:33.075756 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:08:33.075775 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:08:33.075786 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:08:33.075797 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:08:33.075808 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:08:33.075819 | orchestrator | 2026-02-16 03:08:33.075830 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-16 03:08:33.075841 | orchestrator | Monday 16 February 2026 03:08:24 +0000 (0:00:00.567) 0:00:04.957 ******* 2026-02-16 03:08:33.075852 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-16 03:08:33.075885 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 03:08:33.075905 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:08:33.075923 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-16 03:08:33.075942 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 03:08:33.075960 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:08:33.075977 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-16 03:08:33.075996 | orchestrator | skipping: [testbed-node-5] 
=> (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 03:08:33.076015 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:08:33.076033 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-16 03:08:33.076052 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 03:08:33.076070 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:08:33.076088 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-16 03:08:33.076100 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 03:08:33.076110 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:08:33.076121 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-16 03:08:33.076132 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 03:08:33.076143 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:08:33.076153 | orchestrator | 2026-02-16 03:08:33.076164 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-02-16 03:08:33.076175 | orchestrator | Monday 16 February 2026 03:08:24 +0000 (0:00:00.495) 0:00:05.452 ******* 2026-02-16 03:08:33.076186 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:08:33.076197 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:08:33.076207 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:08:33.076218 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:08:33.076229 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:08:33.076239 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:08:33.076250 | orchestrator | 2026-02-16 03:08:33.076261 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-16 03:08:33.076272 | orchestrator | Monday 
16 February 2026 03:08:25 +0000 (0:00:00.898) 0:00:06.351 ******* 2026-02-16 03:08:33.076283 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:08:33.076294 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:08:33.076305 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:08:33.076316 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:08:33.076327 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:08:33.076338 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:08:33.076373 | orchestrator | 2026-02-16 03:08:33.076386 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-16 03:08:33.076397 | orchestrator | Monday 16 February 2026 03:08:26 +0000 (0:00:00.698) 0:00:07.049 ******* 2026-02-16 03:08:33.076407 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:08:33.076418 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:08:33.076429 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:08:33.076439 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:08:33.076459 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:08:33.076470 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:08:33.076481 | orchestrator | 2026-02-16 03:08:33.076492 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-16 03:08:33.076503 | orchestrator | Monday 16 February 2026 03:08:30 +0000 (0:00:03.999) 0:00:11.049 ******* 2026-02-16 03:08:33.076514 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:08:33.076525 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:08:33.076535 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:08:33.076546 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:08:33.076557 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:08:33.076567 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:08:33.076578 | orchestrator | 2026-02-16 03:08:33.076589 | orchestrator | TASK [k3s_download : Download k3s binary armhf] 
******************************** 2026-02-16 03:08:33.076599 | orchestrator | Monday 16 February 2026 03:08:31 +0000 (0:00:00.732) 0:00:11.781 ******* 2026-02-16 03:08:33.076610 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:08:33.076621 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:08:33.076631 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:08:33.076642 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:08:33.076653 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:08:33.076663 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:08:33.076674 | orchestrator | 2026-02-16 03:08:33.076685 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-16 03:08:33.076696 | orchestrator | Monday 16 February 2026 03:08:32 +0000 (0:00:01.045) 0:00:12.827 ******* 2026-02-16 03:08:33.076707 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:08:33.076717 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:08:33.076728 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:08:33.076739 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:08:33.076749 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:08:33.076760 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:08:33.076771 | orchestrator | 2026-02-16 03:08:33.076782 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-02-16 03:08:33.076792 | orchestrator | Monday 16 February 2026 03:08:32 +0000 (0:00:00.517) 0:00:13.345 ******* 2026-02-16 03:08:33.076803 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-02-16 03:08:33.076814 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-02-16 03:08:33.076825 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:08:33.076836 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-02-16 03:08:33.076846 | 
orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-02-16 03:08:33.076857 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:08:33.076868 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-02-16 03:08:33.076887 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-02-16 03:09:37.146678 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:09:37.146820 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-02-16 03:09:37.146850 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-02-16 03:09:37.146869 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:09:37.146889 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-02-16 03:09:37.146901 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-02-16 03:09:37.146912 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:09:37.146923 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-02-16 03:09:37.146934 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-02-16 03:09:37.146945 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:09:37.146956 | orchestrator | 2026-02-16 03:09:37.146968 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-02-16 03:09:37.146981 | orchestrator | Monday 16 February 2026 03:08:33 +0000 (0:00:00.720) 0:00:14.065 ******* 2026-02-16 03:09:37.147017 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:09:37.147029 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:09:37.147039 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:09:37.147068 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:09:37.147090 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:09:37.147101 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:09:37.147111 | orchestrator | 2026-02-16 03:09:37.147122 | orchestrator | TASK [k3s_custom_registries : 
Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-02-16 03:09:37.147134 | orchestrator | Monday 16 February 2026 03:08:33 +0000 (0:00:00.547) 0:00:14.613 ******* 2026-02-16 03:09:37.147163 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:09:37.147175 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:09:37.147188 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:09:37.147201 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:09:37.147213 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:09:37.147225 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:09:37.147237 | orchestrator | 2026-02-16 03:09:37.147250 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-02-16 03:09:37.147263 | orchestrator | 2026-02-16 03:09:37.147275 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-02-16 03:09:37.147288 | orchestrator | Monday 16 February 2026 03:08:35 +0000 (0:00:01.075) 0:00:15.689 ******* 2026-02-16 03:09:37.147300 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:37.147313 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:09:37.147325 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:09:37.147338 | orchestrator | 2026-02-16 03:09:37.147350 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-02-16 03:09:37.147362 | orchestrator | Monday 16 February 2026 03:08:35 +0000 (0:00:00.962) 0:00:16.651 ******* 2026-02-16 03:09:37.147374 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:09:37.147387 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:37.147399 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:09:37.147411 | orchestrator | 2026-02-16 03:09:37.147425 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-02-16 03:09:37.147438 | orchestrator | Monday 16 February 2026 
03:08:37 +0000 (0:00:01.131) 0:00:17.783 ******* 2026-02-16 03:09:37.147479 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:09:37.147490 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:09:37.147501 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:09:37.147512 | orchestrator | 2026-02-16 03:09:37.147522 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-02-16 03:09:37.147533 | orchestrator | Monday 16 February 2026 03:08:38 +0000 (0:00:01.003) 0:00:18.787 ******* 2026-02-16 03:09:37.147552 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:09:37.147563 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:37.147573 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:09:37.147584 | orchestrator | 2026-02-16 03:09:37.147595 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-02-16 03:09:37.147606 | orchestrator | Monday 16 February 2026 03:08:38 +0000 (0:00:00.598) 0:00:19.385 ******* 2026-02-16 03:09:37.147617 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:09:37.147627 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:09:37.147638 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:09:37.147649 | orchestrator | 2026-02-16 03:09:37.147660 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-02-16 03:09:37.147670 | orchestrator | Monday 16 February 2026 03:08:38 +0000 (0:00:00.295) 0:00:19.680 ******* 2026-02-16 03:09:37.147681 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:37.147692 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:09:37.147702 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:09:37.147713 | orchestrator | 2026-02-16 03:09:37.147724 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-02-16 03:09:37.147735 | orchestrator | Monday 16 February 2026 03:08:39 +0000 (0:00:00.821) 0:00:20.502 
******* 2026-02-16 03:09:37.147754 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:09:37.147765 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:09:37.147775 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:37.147786 | orchestrator | 2026-02-16 03:09:37.147797 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-02-16 03:09:37.147808 | orchestrator | Monday 16 February 2026 03:08:40 +0000 (0:00:01.088) 0:00:21.591 ******* 2026-02-16 03:09:37.147819 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:09:37.147830 | orchestrator | 2026-02-16 03:09:37.147841 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-02-16 03:09:37.147852 | orchestrator | Monday 16 February 2026 03:08:41 +0000 (0:00:00.450) 0:00:22.041 ******* 2026-02-16 03:09:37.147862 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:37.147873 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:09:37.147884 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:09:37.147894 | orchestrator | 2026-02-16 03:09:37.147905 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-02-16 03:09:37.147916 | orchestrator | Monday 16 February 2026 03:08:42 +0000 (0:00:01.239) 0:00:23.281 ******* 2026-02-16 03:09:37.147927 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:09:37.147938 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:09:37.147948 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:37.147959 | orchestrator | 2026-02-16 03:09:37.147988 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-02-16 03:09:37.148000 | orchestrator | Monday 16 February 2026 03:08:43 +0000 (0:00:00.583) 0:00:23.865 ******* 2026-02-16 03:09:37.148011 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:09:37.148022 | 
orchestrator | skipping: [testbed-node-2] 2026-02-16 03:09:37.148033 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:09:37.148043 | orchestrator | 2026-02-16 03:09:37.148054 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-02-16 03:09:37.148065 | orchestrator | Monday 16 February 2026 03:08:43 +0000 (0:00:00.747) 0:00:24.613 ******* 2026-02-16 03:09:37.148076 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:09:37.148087 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:09:37.148097 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:09:37.148108 | orchestrator | 2026-02-16 03:09:37.148119 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-02-16 03:09:37.148129 | orchestrator | Monday 16 February 2026 03:08:45 +0000 (0:00:01.369) 0:00:25.983 ******* 2026-02-16 03:09:37.148140 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:09:37.148151 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:09:37.148161 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:09:37.148172 | orchestrator | 2026-02-16 03:09:37.148183 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-02-16 03:09:37.148194 | orchestrator | Monday 16 February 2026 03:08:45 +0000 (0:00:00.461) 0:00:26.444 ******* 2026-02-16 03:09:37.148204 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:09:37.148215 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:09:37.148226 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:09:37.148236 | orchestrator | 2026-02-16 03:09:37.148247 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-02-16 03:09:37.148258 | orchestrator | Monday 16 February 2026 03:08:46 +0000 (0:00:00.288) 0:00:26.733 ******* 2026-02-16 03:09:37.148269 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:09:37.148279 | 
orchestrator | changed: [testbed-node-1] 2026-02-16 03:09:37.148290 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:09:37.148301 | orchestrator | 2026-02-16 03:09:37.148312 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-16 03:09:37.148322 | orchestrator | Monday 16 February 2026 03:08:47 +0000 (0:00:00.990) 0:00:27.723 ******* 2026-02-16 03:09:37.148333 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:37.148344 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:09:37.148361 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:09:37.148372 | orchestrator | 2026-02-16 03:09:37.148383 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-02-16 03:09:37.148394 | orchestrator | Monday 16 February 2026 03:08:47 +0000 (0:00:00.799) 0:00:28.523 ******* 2026-02-16 03:09:37.148405 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:37.148415 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:09:37.148426 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:09:37.148437 | orchestrator | 2026-02-16 03:09:37.148529 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-16 03:09:37.148542 | orchestrator | Monday 16 February 2026 03:08:48 +0000 (0:00:00.315) 0:00:28.839 ******* 2026-02-16 03:09:37.148553 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-16 03:09:37.148566 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-16 03:09:37.148577 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 
2026-02-16 03:09:37.148587 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-16 03:09:37.148598 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-16 03:09:37.148609 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-16 03:09:37.148619 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:37.148630 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:09:37.148641 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:09:37.148652 | orchestrator | 2026-02-16 03:09:37.148662 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-16 03:09:37.148673 | orchestrator | Monday 16 February 2026 03:09:10 +0000 (0:00:22.150) 0:00:50.990 ******* 2026-02-16 03:09:37.148684 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:09:37.148695 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:09:37.148705 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:09:37.148716 | orchestrator | 2026-02-16 03:09:37.148727 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-16 03:09:37.148738 | orchestrator | Monday 16 February 2026 03:09:10 +0000 (0:00:00.281) 0:00:51.271 ******* 2026-02-16 03:09:37.148748 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:09:37.148759 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:09:37.148770 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:09:37.148780 | orchestrator | 2026-02-16 03:09:37.148791 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-16 03:09:37.148802 | orchestrator | Monday 16 February 2026 03:09:11 +0000 (0:00:00.973) 0:00:52.245 
******* 2026-02-16 03:09:37.148812 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:37.148823 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:09:37.148834 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:09:37.148844 | orchestrator | 2026-02-16 03:09:37.148855 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-16 03:09:37.148866 | orchestrator | Monday 16 February 2026 03:09:12 +0000 (0:00:01.128) 0:00:53.373 ******* 2026-02-16 03:09:37.148877 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:09:37.148887 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:09:37.148898 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:09:37.148909 | orchestrator | 2026-02-16 03:09:37.148927 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-16 03:09:53.064070 | orchestrator | Monday 16 February 2026 03:09:37 +0000 (0:00:24.438) 0:01:17.812 ******* 2026-02-16 03:09:53.064184 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:53.064223 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:09:53.064236 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:09:53.064247 | orchestrator | 2026-02-16 03:09:53.064260 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-16 03:09:53.064271 | orchestrator | Monday 16 February 2026 03:09:37 +0000 (0:00:00.580) 0:01:18.392 ******* 2026-02-16 03:09:53.064282 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:53.064293 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:09:53.064303 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:09:53.064314 | orchestrator | 2026-02-16 03:09:53.064326 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-16 03:09:53.064352 | orchestrator | Monday 16 February 2026 03:09:38 +0000 (0:00:00.650) 0:01:19.043 ******* 2026-02-16 03:09:53.064364 | orchestrator | changed: 
[testbed-node-0] 2026-02-16 03:09:53.064375 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:09:53.064386 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:09:53.064397 | orchestrator | 2026-02-16 03:09:53.064408 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-16 03:09:53.064419 | orchestrator | Monday 16 February 2026 03:09:38 +0000 (0:00:00.598) 0:01:19.641 ******* 2026-02-16 03:09:53.064430 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:53.064440 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:09:53.064451 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:09:53.064462 | orchestrator | 2026-02-16 03:09:53.064501 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-16 03:09:53.064512 | orchestrator | Monday 16 February 2026 03:09:39 +0000 (0:00:00.755) 0:01:20.396 ******* 2026-02-16 03:09:53.064523 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:53.064534 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:09:53.064545 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:09:53.064555 | orchestrator | 2026-02-16 03:09:53.064566 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-16 03:09:53.064577 | orchestrator | Monday 16 February 2026 03:09:40 +0000 (0:00:00.295) 0:01:20.692 ******* 2026-02-16 03:09:53.064588 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:09:53.064599 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:09:53.064611 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:09:53.064623 | orchestrator | 2026-02-16 03:09:53.064637 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-16 03:09:53.064651 | orchestrator | Monday 16 February 2026 03:09:40 +0000 (0:00:00.628) 0:01:21.321 ******* 2026-02-16 03:09:53.064664 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:53.064677 | 
orchestrator | ok: [testbed-node-1] 2026-02-16 03:09:53.064688 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:09:53.064699 | orchestrator | 2026-02-16 03:09:53.064710 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-16 03:09:53.064721 | orchestrator | Monday 16 February 2026 03:09:41 +0000 (0:00:00.625) 0:01:21.946 ******* 2026-02-16 03:09:53.064732 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:09:53.064743 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:09:53.064753 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:09:53.064764 | orchestrator | 2026-02-16 03:09:53.064775 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-16 03:09:53.064786 | orchestrator | Monday 16 February 2026 03:09:42 +0000 (0:00:00.864) 0:01:22.810 ******* 2026-02-16 03:09:53.064797 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:09:53.064807 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:09:53.064818 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:09:53.064828 | orchestrator | 2026-02-16 03:09:53.064844 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-16 03:09:53.064855 | orchestrator | Monday 16 February 2026 03:09:43 +0000 (0:00:01.030) 0:01:23.841 ******* 2026-02-16 03:09:53.064866 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:09:53.064877 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:09:53.064887 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:09:53.064907 | orchestrator | 2026-02-16 03:09:53.064918 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-16 03:09:53.064929 | orchestrator | Monday 16 February 2026 03:09:43 +0000 (0:00:00.264) 0:01:24.105 ******* 2026-02-16 03:09:53.064939 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:09:53.064950 | orchestrator | 
skipping: [testbed-node-1] 2026-02-16 03:09:53.064961 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:09:53.064972 | orchestrator | 2026-02-16 03:09:53.064983 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-16 03:09:53.064993 | orchestrator | Monday 16 February 2026 03:09:43 +0000 (0:00:00.259) 0:01:24.364 ******* 2026-02-16 03:09:53.065004 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:09:53.065015 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:53.065026 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:09:53.065036 | orchestrator | 2026-02-16 03:09:53.065047 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-16 03:09:53.065058 | orchestrator | Monday 16 February 2026 03:09:44 +0000 (0:00:00.632) 0:01:24.997 ******* 2026-02-16 03:09:53.065069 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:09:53.065080 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:09:53.065091 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:09:53.065101 | orchestrator | 2026-02-16 03:09:53.065113 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-16 03:09:53.065125 | orchestrator | Monday 16 February 2026 03:09:45 +0000 (0:00:00.812) 0:01:25.809 ******* 2026-02-16 03:09:53.065137 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-16 03:09:53.065148 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-16 03:09:53.065159 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-16 03:09:53.065170 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-16 03:09:53.065198 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-16 03:09:53.065210 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-16 03:09:53.065221 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-16 03:09:53.065233 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-16 03:09:53.065244 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-16 03:09:53.065255 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-16 03:09:53.065265 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-16 03:09:53.065276 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-16 03:09:53.065287 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-16 03:09:53.065297 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-16 03:09:53.065308 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-16 03:09:53.065319 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-16 03:09:53.065330 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-16 03:09:53.065340 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-16 03:09:53.065351 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-16 03:09:53.065362 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-16 03:09:53.065380 | orchestrator | 2026-02-16 03:09:53.065391 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-16 03:09:53.065402 | orchestrator | 2026-02-16 03:09:53.065413 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-16 03:09:53.065424 | orchestrator | Monday 16 February 2026 03:09:48 +0000 (0:00:03.114) 0:01:28.923 ******* 2026-02-16 03:09:53.065435 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:09:53.065446 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:09:53.065457 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:09:53.065485 | orchestrator | 2026-02-16 03:09:53.065497 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-16 03:09:53.065508 | orchestrator | Monday 16 February 2026 03:09:48 +0000 (0:00:00.300) 0:01:29.223 ******* 2026-02-16 03:09:53.065519 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:09:53.065530 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:09:53.065540 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:09:53.065551 | orchestrator | 2026-02-16 03:09:53.065562 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-16 03:09:53.065573 | orchestrator | Monday 16 February 2026 03:09:49 +0000 (0:00:00.818) 0:01:30.042 ******* 2026-02-16 03:09:53.065584 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:09:53.065594 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:09:53.065605 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:09:53.065616 | orchestrator | 2026-02-16 03:09:53.065627 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-16 03:09:53.065637 | orchestrator | Monday 16 February 2026 03:09:49 +0000 (0:00:00.301) 0:01:30.343 ******* 2026-02-16 03:09:53.065648 | 
orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:09:53.065659 | orchestrator | 2026-02-16 03:09:53.065670 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-16 03:09:53.065681 | orchestrator | Monday 16 February 2026 03:09:50 +0000 (0:00:00.446) 0:01:30.790 ******* 2026-02-16 03:09:53.065692 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:09:53.065703 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:09:53.065713 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:09:53.065724 | orchestrator | 2026-02-16 03:09:53.065735 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-16 03:09:53.065746 | orchestrator | Monday 16 February 2026 03:09:50 +0000 (0:00:00.476) 0:01:31.267 ******* 2026-02-16 03:09:53.065757 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:09:53.065767 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:09:53.065778 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:09:53.065789 | orchestrator | 2026-02-16 03:09:53.065799 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-16 03:09:53.065810 | orchestrator | Monday 16 February 2026 03:09:50 +0000 (0:00:00.287) 0:01:31.554 ******* 2026-02-16 03:09:53.065821 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:09:53.065832 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:09:53.065842 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:09:53.065853 | orchestrator | 2026-02-16 03:09:53.065864 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-16 03:09:53.065874 | orchestrator | Monday 16 February 2026 03:09:51 +0000 (0:00:00.288) 0:01:31.842 ******* 2026-02-16 03:09:53.065885 | orchestrator | changed: [testbed-node-3] 2026-02-16 
03:09:53.065896 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:09:53.065906 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:09:53.065917 | orchestrator | 2026-02-16 03:09:53.065928 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-16 03:09:53.065939 | orchestrator | Monday 16 February 2026 03:09:51 +0000 (0:00:00.595) 0:01:32.438 ******* 2026-02-16 03:09:53.065949 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:09:53.065960 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:09:53.065977 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:09:53.065988 | orchestrator | 2026-02-16 03:09:53.065999 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-16 03:09:53.066055 | orchestrator | Monday 16 February 2026 03:09:53 +0000 (0:00:01.287) 0:01:33.725 ******* 2026-02-16 03:10:44.087995 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:10:44.088108 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:10:44.088126 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:10:44.088138 | orchestrator | 2026-02-16 03:10:44.088151 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-16 03:10:44.088163 | orchestrator | Monday 16 February 2026 03:09:54 +0000 (0:00:01.161) 0:01:34.887 ******* 2026-02-16 03:10:44.088180 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:10:44.088198 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:10:44.088210 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:10:44.088221 | orchestrator | 2026-02-16 03:10:44.088232 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-16 03:10:44.088243 | orchestrator | 2026-02-16 03:10:44.088254 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-16 03:10:44.088266 | orchestrator | Monday 16 February 2026 
03:10:05 +0000 (0:00:11.401) 0:01:46.289 ******* 2026-02-16 03:10:44.088276 | orchestrator | ok: [testbed-manager] 2026-02-16 03:10:44.088287 | orchestrator | 2026-02-16 03:10:44.088298 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-16 03:10:44.088309 | orchestrator | Monday 16 February 2026 03:10:06 +0000 (0:00:00.598) 0:01:46.887 ******* 2026-02-16 03:10:44.088320 | orchestrator | ok: [testbed-manager] 2026-02-16 03:10:44.088331 | orchestrator | 2026-02-16 03:10:44.088341 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-16 03:10:44.088371 | orchestrator | Monday 16 February 2026 03:10:06 +0000 (0:00:00.464) 0:01:47.352 ******* 2026-02-16 03:10:44.088383 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-16 03:10:44.088394 | orchestrator | 2026-02-16 03:10:44.088406 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-16 03:10:44.088417 | orchestrator | Monday 16 February 2026 03:10:07 +0000 (0:00:00.563) 0:01:47.916 ******* 2026-02-16 03:10:44.088427 | orchestrator | changed: [testbed-manager] 2026-02-16 03:10:44.088439 | orchestrator | 2026-02-16 03:10:44.088451 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-16 03:10:44.088465 | orchestrator | Monday 16 February 2026 03:10:08 +0000 (0:00:00.961) 0:01:48.877 ******* 2026-02-16 03:10:44.088477 | orchestrator | changed: [testbed-manager] 2026-02-16 03:10:44.088490 | orchestrator | 2026-02-16 03:10:44.088503 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-16 03:10:44.088515 | orchestrator | Monday 16 February 2026 03:10:08 +0000 (0:00:00.537) 0:01:49.415 ******* 2026-02-16 03:10:44.088528 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-16 03:10:44.088541 | orchestrator | 2026-02-16 
03:10:44.088582 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-16 03:10:44.088595 | orchestrator | Monday 16 February 2026 03:10:10 +0000 (0:00:01.517) 0:01:50.932 ******* 2026-02-16 03:10:44.088607 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-16 03:10:44.088620 | orchestrator | 2026-02-16 03:10:44.088633 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-02-16 03:10:44.088646 | orchestrator | Monday 16 February 2026 03:10:11 +0000 (0:00:00.804) 0:01:51.737 ******* 2026-02-16 03:10:44.088658 | orchestrator | ok: [testbed-manager] 2026-02-16 03:10:44.088671 | orchestrator | 2026-02-16 03:10:44.088684 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-16 03:10:44.088702 | orchestrator | Monday 16 February 2026 03:10:11 +0000 (0:00:00.421) 0:01:52.158 ******* 2026-02-16 03:10:44.088715 | orchestrator | ok: [testbed-manager] 2026-02-16 03:10:44.088728 | orchestrator | 2026-02-16 03:10:44.088741 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-16 03:10:44.088776 | orchestrator | 2026-02-16 03:10:44.088789 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-16 03:10:44.088802 | orchestrator | Monday 16 February 2026 03:10:11 +0000 (0:00:00.422) 0:01:52.580 ******* 2026-02-16 03:10:44.088816 | orchestrator | ok: [testbed-manager] 2026-02-16 03:10:44.088828 | orchestrator | 2026-02-16 03:10:44.088839 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-16 03:10:44.088850 | orchestrator | Monday 16 February 2026 03:10:12 +0000 (0:00:00.352) 0:01:52.933 ******* 2026-02-16 03:10:44.088861 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-16 03:10:44.088873 | orchestrator | 
2026-02-16 03:10:44.088883 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-16 03:10:44.088894 | orchestrator | Monday 16 February 2026 03:10:12 +0000 (0:00:00.217) 0:01:53.151 *******
2026-02-16 03:10:44.088905 | orchestrator | ok: [testbed-manager]
2026-02-16 03:10:44.088916 | orchestrator |
2026-02-16 03:10:44.088926 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-16 03:10:44.088937 | orchestrator | Monday 16 February 2026 03:10:13 +0000 (0:00:00.793) 0:01:53.944 *******
2026-02-16 03:10:44.088947 | orchestrator | ok: [testbed-manager]
2026-02-16 03:10:44.088958 | orchestrator |
2026-02-16 03:10:44.088969 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-16 03:10:44.088980 | orchestrator | Monday 16 February 2026 03:10:14 +0000 (0:00:01.738) 0:01:55.683 *******
2026-02-16 03:10:44.088990 | orchestrator | ok: [testbed-manager]
2026-02-16 03:10:44.089001 | orchestrator |
2026-02-16 03:10:44.089012 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-16 03:10:44.089022 | orchestrator | Monday 16 February 2026 03:10:15 +0000 (0:00:00.484) 0:01:56.167 *******
2026-02-16 03:10:44.089033 | orchestrator | ok: [testbed-manager]
2026-02-16 03:10:44.089044 | orchestrator |
2026-02-16 03:10:44.089054 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-16 03:10:44.089065 | orchestrator | Monday 16 February 2026 03:10:15 +0000 (0:00:00.446) 0:01:56.614 *******
2026-02-16 03:10:44.089076 | orchestrator | ok: [testbed-manager]
2026-02-16 03:10:44.089086 | orchestrator |
2026-02-16 03:10:44.089097 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-16 03:10:44.089107 | orchestrator | Monday 16 February 2026 03:10:16 +0000 (0:00:00.608) 0:01:57.222 *******
2026-02-16 03:10:44.089118 | orchestrator | ok: [testbed-manager]
2026-02-16 03:10:44.089129 | orchestrator |
2026-02-16 03:10:44.089140 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-16 03:10:44.089173 | orchestrator | Monday 16 February 2026 03:10:17 +0000 (0:00:01.281) 0:01:58.504 *******
2026-02-16 03:10:44.089186 | orchestrator | ok: [testbed-manager]
2026-02-16 03:10:44.089196 | orchestrator |
2026-02-16 03:10:44.089208 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-16 03:10:44.089228 | orchestrator |
2026-02-16 03:10:44.089247 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-16 03:10:44.089274 | orchestrator | Monday 16 February 2026 03:10:18 +0000 (0:00:00.704) 0:01:59.208 *******
2026-02-16 03:10:44.089294 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:10:44.089312 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:10:44.089330 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:10:44.089348 | orchestrator |
2026-02-16 03:10:44.089366 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-16 03:10:44.089384 | orchestrator | Monday 16 February 2026 03:10:18 +0000 (0:00:00.311) 0:01:59.519 *******
2026-02-16 03:10:44.089403 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:10:44.089420 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:10:44.089439 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:10:44.089458 | orchestrator |
2026-02-16 03:10:44.089476 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-02-16 03:10:44.089495 | orchestrator | Monday 16 February 2026 03:10:19 +0000 (0:00:00.298) 0:01:59.818 *******
2026-02-16 03:10:44.089529 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:10:44.089575 | orchestrator |
2026-02-16 03:10:44.089596 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-02-16 03:10:44.089614 | orchestrator | Monday 16 February 2026 03:10:19 +0000 (0:00:00.648) 0:02:00.467 *******
2026-02-16 03:10:44.089638 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-16 03:10:44.089664 | orchestrator |
2026-02-16 03:10:44.089683 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-02-16 03:10:44.089702 | orchestrator | Monday 16 February 2026 03:10:20 +0000 (0:00:00.804) 0:02:01.271 *******
2026-02-16 03:10:44.089721 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-16 03:10:44.089740 | orchestrator |
2026-02-16 03:10:44.089761 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-02-16 03:10:44.089780 | orchestrator | Monday 16 February 2026 03:10:21 +0000 (0:00:00.824) 0:02:02.095 *******
2026-02-16 03:10:44.089799 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:10:44.089815 | orchestrator |
2026-02-16 03:10:44.089827 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-02-16 03:10:44.089838 | orchestrator | Monday 16 February 2026 03:10:21 +0000 (0:00:00.115) 0:02:02.210 *******
2026-02-16 03:10:44.089849 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-16 03:10:44.089859 | orchestrator |
2026-02-16 03:10:44.089870 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-02-16 03:10:44.089881 | orchestrator | Monday 16 February 2026 03:10:22 +0000 (0:00:00.889) 0:02:03.100 *******
2026-02-16 03:10:44.089892 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-16 03:10:44.089902 | orchestrator |
2026-02-16 03:10:44.089913 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-02-16 03:10:44.089924 | orchestrator | Monday 16 February 2026 03:10:23 +0000 (0:00:01.195) 0:02:04.295 *******
2026-02-16 03:10:44.089943 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-16 03:10:44.089954 | orchestrator |
2026-02-16 03:10:44.089965 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-02-16 03:10:44.089975 | orchestrator | Monday 16 February 2026 03:10:23 +0000 (0:00:00.127) 0:02:04.422 *******
2026-02-16 03:10:44.089986 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-16 03:10:44.089997 | orchestrator |
2026-02-16 03:10:44.090008 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-02-16 03:10:44.090074 | orchestrator | Monday 16 February 2026 03:10:23 +0000 (0:00:00.130) 0:02:04.553 *******
2026-02-16 03:10:44.090086 | orchestrator | ok: [testbed-node-0 -> localhost] => {
2026-02-16 03:10:44.090097 | orchestrator |     "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n"
2026-02-16 03:10:44.090110 | orchestrator | }
2026-02-16 03:10:44.090121 | orchestrator |
2026-02-16 03:10:44.090132 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-02-16 03:10:44.090143 | orchestrator | Monday 16 February 2026 03:10:23 +0000 (0:00:00.129) 0:02:04.683 *******
2026-02-16 03:10:44.090154 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:10:44.090165 | orchestrator |
2026-02-16 03:10:44.090176 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-02-16 03:10:44.090187 | orchestrator | Monday 16 February 2026 03:10:24 +0000 (0:00:00.126) 0:02:04.809 *******
2026-02-16 03:10:44.090198 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-02-16 03:10:44.090208 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-02-16 03:10:44.090220 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-02-16 03:10:44.090231 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-02-16 03:10:44.090241 | orchestrator |
2026-02-16 03:10:44.090252 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-02-16 03:10:44.090273 | orchestrator | Monday 16 February 2026 03:10:39 +0000 (0:00:14.993) 0:02:19.802 *******
2026-02-16 03:10:44.090284 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-16 03:10:44.090295 | orchestrator |
2026-02-16 03:10:44.090305 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-02-16 03:10:44.090316 | orchestrator | Monday 16 February 2026 03:10:40 +0000 (0:00:01.183) 0:02:20.986 *******
2026-02-16 03:10:44.090327 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-16 03:10:44.090338 | orchestrator |
2026-02-16 03:10:44.090349 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-02-16 03:10:44.090360 | orchestrator | Monday 16 February 2026 03:10:41 +0000 (0:00:01.680) 0:02:22.666 *******
2026-02-16 03:10:44.090384 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-16 03:10:59.659735 | orchestrator |
2026-02-16 03:10:59.659830 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-02-16 03:10:59.659842 | orchestrator | Monday 16 February 2026 03:10:44 +0000 (0:00:02.086) 0:02:24.753 *******
2026-02-16 03:10:59.659850 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:10:59.659859 | orchestrator |
2026-02-16 03:10:59.659867 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-02-16 03:10:59.659875 | orchestrator | Monday 16 February 2026 03:10:44 +0000 (0:00:00.124) 0:02:24.878 *******
2026-02-16 03:10:59.659882 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-02-16 03:10:59.659891 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-02-16 03:10:59.659898 | orchestrator |
2026-02-16 03:10:59.659906 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-02-16 03:10:59.659913 | orchestrator | Monday 16 February 2026 03:10:45 +0000 (0:00:01.754) 0:02:26.632 *******
2026-02-16 03:10:59.659921 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:10:59.659928 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:10:59.659935 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:10:59.659942 | orchestrator |
2026-02-16 03:10:59.659950 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-02-16 03:10:59.659957 | orchestrator | Monday 16 February 2026 03:10:46 +0000 (0:00:00.311) 0:02:26.944 *******
2026-02-16 03:10:59.659965 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:10:59.659973 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:10:59.659980 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:10:59.659987 | orchestrator |
2026-02-16 03:10:59.659995 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-02-16 03:10:59.660002 | orchestrator |
2026-02-16 03:10:59.660009 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-02-16 03:10:59.660017 | orchestrator | Monday 16 February 2026 03:10:47 +0000 (0:00:00.805) 0:02:27.749 *******
2026-02-16 03:10:59.660024 | orchestrator | ok: [testbed-manager]
2026-02-16 03:10:59.660032 | orchestrator |
2026-02-16 03:10:59.660039 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-02-16 03:10:59.660047 | orchestrator | Monday 16 February 2026 03:10:47 +0000 (0:00:00.302) 0:02:28.052 *******
2026-02-16 03:10:59.660054 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-02-16 03:10:59.660062 | orchestrator |
2026-02-16 03:10:59.660069 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-02-16 03:10:59.660077 | orchestrator | Monday 16 February 2026 03:10:47 +0000 (0:00:00.212) 0:02:28.264 *******
2026-02-16 03:10:59.660084 | orchestrator | ok: [testbed-manager]
2026-02-16 03:10:59.660091 | orchestrator |
2026-02-16 03:10:59.660099 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-02-16 03:10:59.660106 | orchestrator |
2026-02-16 03:10:59.660114 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-02-16 03:10:59.660121 | orchestrator | Monday 16 February 2026 03:10:50 +0000 (0:00:03.264) 0:02:31.529 *******
2026-02-16 03:10:59.660143 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:10:59.660150 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:10:59.660157 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:10:59.660165 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:10:59.660172 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:10:59.660179 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:10:59.660186 | orchestrator |
2026-02-16 03:10:59.660193 | orchestrator | TASK [Manage labels] ***********************************************************
2026-02-16 03:10:59.660201 | orchestrator | Monday 16 February 2026 03:10:51 +0000 (0:00:00.712) 0:02:32.242 *******
2026-02-16 03:10:59.660215 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-16 03:10:59.660223 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-16 03:10:59.660230 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-16 03:10:59.660238 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-16 03:10:59.660245 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-16 03:10:59.660252 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-16 03:10:59.660259 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-16 03:10:59.660266 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-16 03:10:59.660273 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-16 03:10:59.660281 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-16 03:10:59.660288 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-16 03:10:59.660295 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-16 03:10:59.660302 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-16 03:10:59.660310 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-16 03:10:59.660317 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-16 03:10:59.660325 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-16 03:10:59.660332 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-16 03:10:59.660351 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-16 03:10:59.660359 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-16 03:10:59.660366 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-16 03:10:59.660373 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-16 03:10:59.660381 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-16 03:10:59.660388 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-16 03:10:59.660395 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-16 03:10:59.660402 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-16 03:10:59.660410 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-16 03:10:59.660417 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-16 03:10:59.660424 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-16 03:10:59.660431 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-16 03:10:59.660445 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-16 03:10:59.660452 | orchestrator |
2026-02-16 03:10:59.660460 | orchestrator | TASK [Manage annotations] ******************************************************
2026-02-16 03:10:59.660467 | orchestrator | Monday 16 February 2026 03:10:58 +0000 (0:00:07.068) 0:02:39.310 *******
2026-02-16 03:10:59.660474 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:10:59.660482 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:10:59.660489 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:10:59.660496 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:10:59.660503 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:10:59.660510 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:10:59.660518 | orchestrator |
2026-02-16 03:10:59.660525 | orchestrator | TASK [Manage taints] ***********************************************************
2026-02-16 03:10:59.660532 | orchestrator | Monday 16 February 2026 03:10:59 +0000 (0:00:00.444) 0:02:39.755 *******
2026-02-16 03:10:59.660539 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:10:59.660547 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:10:59.660554 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:10:59.660561 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:10:59.660625 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:10:59.660633 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:10:59.660640 | orchestrator |
2026-02-16 03:10:59.660647 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 03:10:59.660655 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 03:10:59.660669 | orchestrator | testbed-node-0 : ok=53  changed=12  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-16 03:10:59.660677 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-16 03:10:59.660684 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-16 03:10:59.660692 | orchestrator | testbed-node-3 : ok=16  changed=5  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-16 03:10:59.660699 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-16 03:10:59.660706 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-16 03:10:59.660714 | orchestrator |
2026-02-16 03:10:59.660721 | orchestrator |
2026-02-16 03:10:59.660728 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 03:10:59.660736 | orchestrator | Monday 16 February 2026 03:10:59 +0000 (0:00:00.555) 0:02:40.310 *******
2026-02-16 03:10:59.660743 | orchestrator | ===============================================================================
2026-02-16 03:10:59.660750 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.44s
2026-02-16 03:10:59.660757 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 22.15s
2026-02-16 03:10:59.660765 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 14.99s
2026-02-16 03:10:59.660772 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.40s
2026-02-16 03:10:59.660779 | orchestrator | Manage labels ----------------------------------------------------------- 7.07s
2026-02-16 03:10:59.660786 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 4.00s
2026-02-16 03:10:59.660794 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 3.27s
2026-02-16 03:10:59.660806 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.11s
2026-02-16 03:10:59.660819 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 2.09s
2026-02-16 03:10:59.858790 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.75s
2026-02-16 03:10:59.858885 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.74s
2026-02-16 03:10:59.858897 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.68s
2026-02-16 03:10:59.858907 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.52s
2026-02-16 03:10:59.858917 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.37s
2026-02-16 03:10:59.858927 | orchestrator | k3s_agent : Create custom resolv.conf for k3s --------------------------- 1.29s
2026-02-16 03:10:59.858936 | orchestrator | kubectl : Install required packages ------------------------------------- 1.28s
2026-02-16 03:10:59.858946 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.24s
2026-02-16 03:10:59.858955 | orchestrator | k3s_server_post : Check Cilium version ---------------------------------- 1.20s
2026-02-16 03:10:59.858965 | orchestrator | k3s_server_post : Set _cilium_bgp_neighbors fact ------------------------ 1.18s
2026-02-16 03:10:59.858975 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.16s
2026-02-16 03:11:00.038712 | orchestrator | + osism apply copy-kubeconfig
2026-02-16 03:11:11.843677 | orchestrator | 2026-02-16 03:11:11 | INFO  | Task 4c781736-d04e-4fe2-8e30-8503a9efaeee (copy-kubeconfig) was prepared for execution.
2026-02-16 03:11:11.843794 | orchestrator | 2026-02-16 03:11:11 | INFO  | It takes a moment until task 4c781736-d04e-4fe2-8e30-8503a9efaeee (copy-kubeconfig) has been started and output is visible here.
2026-02-16 03:11:18.195139 | orchestrator |
2026-02-16 03:11:18.195252 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-02-16 03:11:18.195268 | orchestrator |
2026-02-16 03:11:18.195281 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-16 03:11:18.195293 | orchestrator | Monday 16 February 2026 03:11:15 +0000 (0:00:00.114) 0:00:00.114 *******
2026-02-16 03:11:18.195305 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-16 03:11:18.195316 | orchestrator |
2026-02-16 03:11:18.195327 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-16 03:11:18.195338 | orchestrator | Monday 16 February 2026 03:11:16 +0000 (0:00:00.678) 0:00:00.793 *******
2026-02-16 03:11:18.195349 | orchestrator | changed: [testbed-manager]
2026-02-16 03:11:18.195360 | orchestrator |
2026-02-16 03:11:18.195371 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-02-16 03:11:18.195382 | orchestrator | Monday 16 February 2026 03:11:17 +0000 (0:00:01.025) 0:00:01.819 *******
2026-02-16 03:11:18.195394 | orchestrator | changed: [testbed-manager]
2026-02-16 03:11:18.195405 | orchestrator |
2026-02-16 03:11:18.195416 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 03:11:18.195427 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 03:11:18.195439 | orchestrator |
2026-02-16 03:11:18.195450 | orchestrator |
2026-02-16 03:11:18.195462 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 03:11:18.195473 | orchestrator | Monday 16 February 2026 03:11:18 +0000 (0:00:00.397) 0:00:02.217 *******
2026-02-16 03:11:18.195483 | orchestrator | ===============================================================================
2026-02-16 03:11:18.195494 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.03s
2026-02-16 03:11:18.195505 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.68s
2026-02-16 03:11:18.195516 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.40s
2026-02-16 03:11:18.402982 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-02-16 03:11:30.466893 | orchestrator | 2026-02-16 03:11:30 | INFO  | Task fb6c2ea0-0032-4193-824b-39631047b6dc (openstackclient) was prepared for execution.
2026-02-16 03:11:30.467008 | orchestrator | 2026-02-16 03:11:30 | INFO  | It takes a moment until task fb6c2ea0-0032-4193-824b-39631047b6dc (openstackclient) has been started and output is visible here.
2026-02-16 03:12:16.062600 | orchestrator |
2026-02-16 03:12:16.062777 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-16 03:12:16.062799 | orchestrator |
2026-02-16 03:12:16.062812 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-16 03:12:16.062824 | orchestrator | Monday 16 February 2026 03:11:34 +0000 (0:00:00.218) 0:00:00.218 *******
2026-02-16 03:12:16.062837 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-16 03:12:16.062849 | orchestrator |
2026-02-16 03:12:16.062861 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-16 03:12:16.062872 | orchestrator | Monday 16 February 2026 03:11:34 +0000 (0:00:00.208) 0:00:00.426 *******
2026-02-16 03:12:16.062883 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-16 03:12:16.062895 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-02-16 03:12:16.062907 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-02-16 03:12:16.062918 | orchestrator |
2026-02-16 03:12:16.062929 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-02-16 03:12:16.062940 | orchestrator | Monday 16 February 2026 03:11:36 +0000 (0:00:01.226) 0:00:01.653 *******
2026-02-16 03:12:16.062951 | orchestrator | changed: [testbed-manager]
2026-02-16 03:12:16.062962 | orchestrator |
2026-02-16 03:12:16.062973 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-02-16 03:12:16.062984 | orchestrator | Monday 16 February 2026 03:11:37 +0000 (0:00:01.359) 0:00:03.012 *******
2026-02-16 03:12:16.062994 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-02-16 03:12:16.063006 | orchestrator | ok: [testbed-manager]
2026-02-16 03:12:16.063018 | orchestrator |
2026-02-16 03:12:16.063029 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-02-16 03:12:16.063040 | orchestrator | Monday 16 February 2026 03:12:11 +0000 (0:00:33.679) 0:00:36.692 *******
2026-02-16 03:12:16.063051 | orchestrator | changed: [testbed-manager]
2026-02-16 03:12:16.063062 | orchestrator |
2026-02-16 03:12:16.063073 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-02-16 03:12:16.063084 | orchestrator | Monday 16 February 2026 03:12:12 +0000 (0:00:00.623) 0:00:37.596 *******
2026-02-16 03:12:16.063095 | orchestrator | ok: [testbed-manager]
2026-02-16 03:12:16.063106 | orchestrator |
2026-02-16 03:12:16.063117 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-02-16 03:12:16.063130 | orchestrator | Monday 16 February 2026 03:12:12 +0000 (0:00:01.420) 0:00:38.219 *******
2026-02-16 03:12:16.063143 | orchestrator | changed: [testbed-manager]
2026-02-16 03:12:16.063156 | orchestrator |
2026-02-16 03:12:16.063169 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-16 03:12:16.063182 | orchestrator | Monday 16 February 2026 03:12:14 +0000 (0:00:00.663) 0:00:39.640 *******
2026-02-16 03:12:16.063195 | orchestrator | changed: [testbed-manager]
2026-02-16 03:12:16.063208 | orchestrator |
2026-02-16 03:12:16.063243 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-16 03:12:16.063257 | orchestrator | Monday 16 February 2026 03:12:14 +0000 (0:00:00.574) 0:00:40.304 *******
2026-02-16 03:12:16.063269 | orchestrator | changed: [testbed-manager]
2026-02-16 03:12:16.063282 | orchestrator |
2026-02-16 03:12:16.063295 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-16 03:12:16.063330 | orchestrator | Monday 16 February 2026 03:12:15 +0000 (0:00:00.574) 0:00:40.879 *******
2026-02-16 03:12:16.063343 | orchestrator | ok: [testbed-manager]
2026-02-16 03:12:16.063355 | orchestrator |
2026-02-16 03:12:16.063368 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 03:12:16.063380 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 03:12:16.063394 | orchestrator |
2026-02-16 03:12:16.063407 | orchestrator |
2026-02-16 03:12:16.063442 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 03:12:16.063455 | orchestrator | Monday 16 February 2026 03:12:15 +0000 (0:00:00.402) 0:00:41.281 *******
2026-02-16 03:12:16.063468 | orchestrator | ===============================================================================
2026-02-16 03:12:16.063480 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.68s
2026-02-16 03:12:16.063491 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.42s
2026-02-16 03:12:16.063502 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.36s
2026-02-16 03:12:16.063518 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.23s
2026-02-16 03:12:16.063529 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.90s
2026-02-16 03:12:16.063540 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.66s
2026-02-16 03:12:16.063551 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.62s
2026-02-16 03:12:16.063561 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.57s
2026-02-16 03:12:16.063572 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.40s
2026-02-16 03:12:16.063583 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.21s
2026-02-16 03:12:18.247119 | orchestrator | 2026-02-16 03:12:18 | INFO  | Task 541d690a-aa3f-4d0e-a90b-6775d9423190 (common) was prepared for execution.
2026-02-16 03:12:18.247222 | orchestrator | 2026-02-16 03:12:18 | INFO  | It takes a moment until task 541d690a-aa3f-4d0e-a90b-6775d9423190 (common) has been started and output is visible here.
2026-02-16 03:12:30.084375 | orchestrator | 2026-02-16 03:12:30.084493 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-16 03:12:30.084510 | orchestrator | 2026-02-16 03:12:30.084524 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-16 03:12:30.084537 | orchestrator | Monday 16 February 2026 03:12:22 +0000 (0:00:00.267) 0:00:00.267 ******* 2026-02-16 03:12:30.084550 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:12:30.084563 | orchestrator | 2026-02-16 03:12:30.084575 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-16 03:12:30.084587 | orchestrator | Monday 16 February 2026 03:12:23 +0000 (0:00:01.262) 0:00:01.530 ******* 2026-02-16 03:12:30.084598 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 03:12:30.084610 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 03:12:30.084622 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 03:12:30.084633 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 03:12:30.084645 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 03:12:30.084656 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 03:12:30.084668 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 03:12:30.084679 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 03:12:30.084760 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 
2026-02-16 03:12:30.084783 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 03:12:30.084802 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 03:12:30.084821 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 03:12:30.084832 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 03:12:30.084843 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 03:12:30.084853 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 03:12:30.084864 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 03:12:30.084875 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 03:12:30.084886 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 03:12:30.084897 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 03:12:30.084907 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 03:12:30.084918 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 03:12:30.084929 | orchestrator | 2026-02-16 03:12:30.084940 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-16 03:12:30.084950 | orchestrator | Monday 16 February 2026 03:12:26 +0000 (0:00:02.543) 0:00:04.073 ******* 2026-02-16 03:12:30.084962 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:12:30.084973 | orchestrator | 2026-02-16 03:12:30.084984 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-16 03:12:30.084995 | orchestrator | Monday 16 February 2026 03:12:27 +0000 (0:00:01.274) 0:00:05.347 ******* 2026-02-16 03:12:30.085025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 03:12:30.085046 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 03:12:30.085077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 03:12:30.085090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 03:12:30.085112 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 03:12:30.085131 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 03:12:30.085150 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 03:12:30.085168 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:30.085188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:30.085218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:31.167466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:31.167565 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:31.167820 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:31.167843 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:31.167856 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:31.167878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 
03:12:31.167890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:31.167920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:31.167954 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:31.167966 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:31.167978 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:31.167990 | orchestrator | 2026-02-16 03:12:31.168003 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-16 03:12:31.168015 | orchestrator | Monday 16 February 2026 03:12:30 +0000 (0:00:03.491) 0:00:08.839 ******* 2026-02-16 03:12:31.168029 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 03:12:31.168047 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:31.168059 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:31.168071 | orchestrator | skipping: [testbed-manager] 2026-02-16 03:12:31.168083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 03:12:31.168117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:31.746111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:31.746194 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:12:31.746205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 03:12:31.746213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:31.746220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:31.746226 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:12:31.746247 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 03:12:31.746253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:31.746278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:31.746284 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:12:31.746303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 03:12:31.746310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:31.746316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:31.746322 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:12:31.746328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 03:12:31.746335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:31.746341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:31.746351 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:12:31.746387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 03:12:31.746397 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:32.518581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:32.518748 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:12:32.518771 | orchestrator | 2026-02-16 03:12:32.518829 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-16 03:12:32.518844 | orchestrator | Monday 16 February 2026 03:12:31 +0000 (0:00:00.860) 0:00:09.700 ******* 2026-02-16 03:12:32.518858 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 03:12:32.518873 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:32.518901 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:32.518914 | orchestrator | skipping: [testbed-manager] 2026-02-16 03:12:32.518947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 03:12:32.518959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:32.518971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:32.518983 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:12:32.519023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 03:12:32.519036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:32.519048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:32.519060 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:12:32.519071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 03:12:32.519095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-16 03:12:32.519109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:32.519122 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:12:32.519134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 03:12:32.519167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:37.187293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:37.187406 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:12:37.187427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 03:12:37.187442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:37.187490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:37.187504 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:12:37.187515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 03:12:37.187527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:37.187538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:37.187550 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:12:37.187561 | orchestrator | 2026-02-16 
03:12:37.187573 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-16 03:12:37.187585 | orchestrator | Monday 16 February 2026 03:12:33 +0000 (0:00:01.622) 0:00:11.322 ******* 2026-02-16 03:12:37.187596 | orchestrator | skipping: [testbed-manager] 2026-02-16 03:12:37.187606 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:12:37.187617 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:12:37.187628 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:12:37.187656 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:12:37.187667 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:12:37.187678 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:12:37.187689 | orchestrator | 2026-02-16 03:12:37.187700 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-16 03:12:37.187757 | orchestrator | Monday 16 February 2026 03:12:34 +0000 (0:00:00.663) 0:00:11.986 ******* 2026-02-16 03:12:37.187768 | orchestrator | skipping: [testbed-manager] 2026-02-16 03:12:37.187779 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:12:37.187790 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:12:37.187800 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:12:37.187811 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:12:37.187822 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:12:37.187835 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:12:37.187847 | orchestrator | 2026-02-16 03:12:37.187860 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-16 03:12:37.187873 | orchestrator | Monday 16 February 2026 03:12:34 +0000 (0:00:00.781) 0:00:12.768 ******* 2026-02-16 03:12:37.187887 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 03:12:37.187909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 03:12:37.187923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 03:12:37.187944 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 03:12:37.187956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 03:12:37.187967 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 03:12:37.187994 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 03:12:39.922270 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:39.922430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:39.922462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:39.922475 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:39.922487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:39.922499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:39.922531 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:39.922553 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:39.922565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:39.922577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:39.922593 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:39.922604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:39.922616 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:39.922627 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:12:39.922639 | orchestrator | 2026-02-16 03:12:39.922652 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-16 03:12:39.922663 | orchestrator | Monday 16 February 
2026 03:12:38 +0000 (0:00:03.319) 0:00:16.087 ******* 2026-02-16 03:12:39.922675 | orchestrator | [WARNING]: Skipped 2026-02-16 03:12:39.922686 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-16 03:12:39.922699 | orchestrator | to this access issue: 2026-02-16 03:12:39.922737 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-16 03:12:39.922756 | orchestrator | directory 2026-02-16 03:12:39.922769 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-16 03:12:39.922782 | orchestrator | 2026-02-16 03:12:39.922793 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-16 03:12:39.922804 | orchestrator | Monday 16 February 2026 03:12:39 +0000 (0:00:00.915) 0:00:17.003 ******* 2026-02-16 03:12:39.922814 | orchestrator | [WARNING]: Skipped 2026-02-16 03:12:39.922832 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-16 03:12:49.076357 | orchestrator | to this access issue: 2026-02-16 03:12:49.076491 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-16 03:12:49.076512 | orchestrator | directory 2026-02-16 03:12:49.076521 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-16 03:12:49.076552 | orchestrator | 2026-02-16 03:12:49.076596 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-16 03:12:49.076606 | orchestrator | Monday 16 February 2026 03:12:40 +0000 (0:00:01.107) 0:00:18.111 ******* 2026-02-16 03:12:49.076613 | orchestrator | [WARNING]: Skipped 2026-02-16 03:12:49.076620 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-16 03:12:49.076628 | orchestrator | to this access issue: 2026-02-16 03:12:49.076635 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-16 03:12:49.076641 | orchestrator | directory 2026-02-16 03:12:49.076648 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-16 03:12:49.076654 | orchestrator | 2026-02-16 03:12:49.076665 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-16 03:12:49.076672 | orchestrator | Monday 16 February 2026 03:12:40 +0000 (0:00:00.782) 0:00:18.893 ******* 2026-02-16 03:12:49.076678 | orchestrator | [WARNING]: Skipped 2026-02-16 03:12:49.076685 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-16 03:12:49.076691 | orchestrator | to this access issue: 2026-02-16 03:12:49.076697 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-16 03:12:49.076704 | orchestrator | directory 2026-02-16 03:12:49.076710 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-16 03:12:49.076716 | orchestrator | 2026-02-16 03:12:49.076723 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-16 03:12:49.076769 | orchestrator | Monday 16 February 2026 03:12:41 +0000 (0:00:00.790) 0:00:19.684 ******* 2026-02-16 03:12:49.076775 | orchestrator | changed: [testbed-manager] 2026-02-16 03:12:49.076782 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:12:49.076788 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:12:49.076794 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:12:49.076801 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:12:49.076807 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:12:49.076813 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:12:49.076819 | orchestrator | 2026-02-16 03:12:49.076837 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-16 03:12:49.076844 | orchestrator | 
Monday 16 February 2026 03:12:44 +0000 (0:00:02.419) 0:00:22.104 ******* 2026-02-16 03:12:49.076851 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 03:12:49.076858 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 03:12:49.076865 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 03:12:49.076871 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 03:12:49.076877 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 03:12:49.076883 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 03:12:49.076909 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 03:12:49.076916 | orchestrator | 2026-02-16 03:12:49.076924 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-16 03:12:49.076931 | orchestrator | Monday 16 February 2026 03:12:46 +0000 (0:00:02.075) 0:00:24.179 ******* 2026-02-16 03:12:49.076938 | orchestrator | changed: [testbed-manager] 2026-02-16 03:12:49.076946 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:12:49.076953 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:12:49.076961 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:12:49.076968 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:12:49.076975 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:12:49.076982 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:12:49.076988 | orchestrator | 2026-02-16 03:12:49.076996 | orchestrator | TASK [common : Ensuring config directories have correct owner and 
permission] *** 2026-02-16 03:12:49.077003 | orchestrator | Monday 16 February 2026 03:12:47 +0000 (0:00:01.784) 0:00:25.963 ******* 2026-02-16 03:12:49.077013 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 03:12:49.077039 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:12:49.077047 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 03:12:49.077055 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:49.077063 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-16 03:12:49.077076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:49.077083 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-16 03:12:49.077098 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:49.077112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:49.077124 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-16 03:12:54.839647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:54.839825 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:54.839883 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-16 03:12:54.839898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:54.839910 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-16 03:12:54.839921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:54.839933 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:54.839974 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:54.839987 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:54.839998 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:54.840024 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:54.840036 | orchestrator |
2026-02-16 03:12:54.840049 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-02-16 03:12:54.840062 | orchestrator | Monday 16 February 2026 03:12:49 +0000 (0:00:01.439) 0:00:27.403 *******
2026-02-16 03:12:54.840073 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-16 03:12:54.840084 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-16 03:12:54.840095 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-16 03:12:54.840106 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-16 03:12:54.840117 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-16 03:12:54.840127 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-16 03:12:54.840138 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-16 03:12:54.840149 | orchestrator |
2026-02-16 03:12:54.840160 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-16 03:12:54.840171 | orchestrator | Monday 16 February 2026 03:12:51 +0000 (0:00:01.808) 0:00:29.211 *******
2026-02-16 03:12:54.840183 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-16 03:12:54.840197 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-16 03:12:54.840209 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-16 03:12:54.840222 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-16 03:12:54.840234 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-16 03:12:54.840246 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-16 03:12:54.840258 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-16 03:12:54.840271 | orchestrator |
2026-02-16 03:12:54.840284 | orchestrator | TASK [common : Check common containers] ****************************************
2026-02-16 03:12:54.840296 | orchestrator | Monday 16 February 2026 03:12:52 +0000 (0:00:01.644) 0:00:30.855 *******
2026-02-16 03:12:54.840309 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-16 03:12:54.840332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-16 03:12:55.474229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name':
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-16 03:12:55.474311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-16 03:12:55.474317 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-16 03:12:55.474322 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-16 03:12:55.474326 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:55.474330 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-16 03:12:55.474334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:55.474363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:55.474370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:55.474375 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:55.474379 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:55.474384 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:55.474390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:55.474394 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:12:55.474406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:14:14.840466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:14:14.840631 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:14:14.840665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment':
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:14:14.840718 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:14:14.840742 | orchestrator |
2026-02-16 03:14:14.840765 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-02-16 03:14:14.840785 | orchestrator | Monday 16 February 2026 03:12:55 +0000 (0:00:02.570) 0:00:33.425 *******
2026-02-16 03:14:14.840819 | orchestrator | changed: [testbed-manager]
2026-02-16 03:14:14.840840 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:14:14.840936 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:14:14.840962 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:14:14.840982 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:14:14.841001 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:14:14.841018 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:14:14.841033 | orchestrator |
2026-02-16 03:14:14.841046 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-02-16 03:14:14.841060 | orchestrator | Monday 16 February 2026 03:12:56 +0000 (0:00:01.309) 0:00:34.734 *******
2026-02-16 03:14:14.841073 | orchestrator | changed: [testbed-manager]
2026-02-16 03:14:14.841085 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:14:14.841097 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:14:14.841108 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:14:14.841119 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:14:14.841149 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:14:14.841161 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:14:14.841172 | orchestrator |
2026-02-16 03:14:14.841231 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-16 03:14:14.841244 | orchestrator | Monday 16 February 2026 03:12:57 +0000 (0:00:00.063) 0:00:35.768 *******
2026-02-16 03:14:14.841255 | orchestrator |
2026-02-16 03:14:14.841266 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-16 03:14:14.841277 | orchestrator | Monday 16 February 2026 03:12:57 +0000 (0:00:00.063) 0:00:35.832 *******
2026-02-16 03:14:14.841288 | orchestrator |
2026-02-16 03:14:14.841299 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-16 03:14:14.841310 | orchestrator | Monday 16 February 2026 03:12:57 +0000 (0:00:00.061) 0:00:35.956 *******
2026-02-16 03:14:14.841321 | orchestrator |
2026-02-16 03:14:14.841331 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-16 03:14:14.841342 | orchestrator | Monday 16 February 2026 03:12:58 +0000 (0:00:00.261) 0:00:36.218 *******
2026-02-16 03:14:14.841353 | orchestrator |
2026-02-16 03:14:14.841363 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-16 03:14:14.841374 | orchestrator | Monday 16 February 2026 03:12:58 +0000 (0:00:00.081) 0:00:36.299 *******
2026-02-16 03:14:14.841385 | orchestrator |
2026-02-16 03:14:14.841396 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-16 03:14:14.841406 | orchestrator | Monday 16 February 2026 03:12:58 +0000 (0:00:00.061) 0:00:36.361 *******
2026-02-16 03:14:14.841417 | orchestrator |
2026-02-16 03:14:14.841428 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-16 03:14:14.841438 | orchestrator | Monday 16 February 2026 03:12:58 +0000 (0:00:00.061) 0:00:36.361 *******
2026-02-16 03:14:14.841449 | orchestrator |
2026-02-16 03:14:14.841461 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-16 03:14:14.841472 | orchestrator | Monday 16 February 2026 03:12:58 +0000 (0:00:00.089) 0:00:36.451 *******
2026-02-16 03:14:14.841482 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:14:14.841493 | orchestrator | changed: [testbed-manager]
2026-02-16 03:14:14.841504 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:14:14.841568 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:14:14.841579 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:14:14.841613 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:14:14.841625 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:14:14.841636 | orchestrator |
2026-02-16 03:14:14.841647 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-16 03:14:14.841658 | orchestrator | Monday 16 February 2026 03:13:33 +0000 (0:00:35.428) 0:01:11.879 *******
2026-02-16 03:14:14.841668 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:14:14.841679 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:14:14.841690 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:14:14.841701 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:14:14.841711 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:14:14.841722 | orchestrator | changed: [testbed-manager]
2026-02-16 03:14:14.841733 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:14:14.841743 | orchestrator |
2026-02-16 03:14:14.841754 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-16
03:14:14.841765 | orchestrator | Monday 16 February 2026 03:14:04 +0000 (0:00:30.305) 0:01:42.184 *******
2026-02-16 03:14:14.841776 | orchestrator | ok: [testbed-manager]
2026-02-16 03:14:14.841795 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:14:14.841806 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:14:14.841817 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:14:14.841827 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:14:14.841838 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:14:14.841849 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:14:14.841890 | orchestrator |
2026-02-16 03:14:14.841902 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-16 03:14:14.841913 | orchestrator | Monday 16 February 2026 03:14:06 +0000 (0:00:01.901) 0:01:44.086 *******
2026-02-16 03:14:14.841924 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:14:14.841943 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:14:14.841954 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:14:14.841965 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:14:14.841976 | orchestrator | changed: [testbed-manager]
2026-02-16 03:14:14.841987 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:14:14.841997 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:14:14.842008 | orchestrator |
2026-02-16 03:14:14.842070 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 03:14:14.842083 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-16 03:14:14.842104 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-16 03:14:14.842132 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-16 03:14:14.842154 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-16 03:14:14.842172 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-16 03:14:14.842191 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-16 03:14:14.842210 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-16 03:14:14.842230 | orchestrator |
2026-02-16 03:14:14.842251 | orchestrator |
2026-02-16 03:14:14.842270 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 03:14:14.842290 | orchestrator | Monday 16 February 2026 03:14:14 +0000 (0:00:08.685) 0:01:52.772 *******
2026-02-16 03:14:14.842310 | orchestrator | ===============================================================================
2026-02-16 03:14:14.842329 | orchestrator | common : Restart fluentd container ------------------------------------- 35.43s
2026-02-16 03:14:14.842340 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 30.31s
2026-02-16 03:14:14.842351 | orchestrator | common : Restart cron container ----------------------------------------- 8.69s
2026-02-16 03:14:14.842361 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.49s
2026-02-16 03:14:14.842372 | orchestrator | common : Copying over config.json files for services -------------------- 3.32s
2026-02-16 03:14:14.842383 | orchestrator | common : Check common containers ---------------------------------------- 2.57s
2026-02-16 03:14:14.842393 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.54s
2026-02-16 03:14:14.842404 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.42s
2026-02-16 03:14:14.842415 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.08s
2026-02-16 03:14:14.842425 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.90s
2026-02-16 03:14:14.842436 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.81s
2026-02-16 03:14:14.842447 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.78s
2026-02-16 03:14:14.842458 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.64s
2026-02-16 03:14:14.842468 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.62s
2026-02-16 03:14:14.842479 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.44s
2026-02-16 03:14:14.842490 | orchestrator | common : Creating log volume -------------------------------------------- 1.31s
2026-02-16 03:14:14.842513 | orchestrator | common : include_tasks -------------------------------------------------- 1.27s
2026-02-16 03:14:15.220388 | orchestrator | common : include_tasks -------------------------------------------------- 1.26s
2026-02-16 03:14:15.220492 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.11s
2026-02-16 03:14:15.220509 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.03s
2026-02-16 03:14:17.452142 | orchestrator | 2026-02-16 03:14:17 | INFO  | Task 3570d5b7-11bb-460e-b118-2f2ab0dc2cc1 (loadbalancer) was prepared for execution.
2026-02-16 03:14:17.452397 | orchestrator | 2026-02-16 03:14:17 | INFO  | It takes a moment until task 3570d5b7-11bb-460e-b118-2f2ab0dc2cc1 (loadbalancer) has been started and output is visible here.
2026-02-16 03:14:30.847009 | orchestrator |
2026-02-16 03:14:30.847147 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-16 03:14:30.847166 | orchestrator |
2026-02-16 03:14:30.847194 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-16 03:14:30.847206 | orchestrator | Monday 16 February 2026 03:14:21 +0000 (0:00:00.253) 0:00:00.253 *******
2026-02-16 03:14:30.847218 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:14:30.847229 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:14:30.847240 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:14:30.847251 | orchestrator |
2026-02-16 03:14:30.847263 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-16 03:14:30.847274 | orchestrator | Monday 16 February 2026 03:14:21 +0000 (0:00:00.283) 0:00:00.536 *******
2026-02-16 03:14:30.847286 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-16 03:14:30.847297 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-16 03:14:30.847308 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-16 03:14:30.847318 | orchestrator |
2026-02-16 03:14:30.847329 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-16 03:14:30.847339 | orchestrator |
2026-02-16 03:14:30.847350 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-16 03:14:30.847361 | orchestrator | Monday 16 February 2026 03:14:22 +0000 (0:00:00.406) 0:00:00.942 *******
2026-02-16 03:14:30.847372 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:14:30.847383 | orchestrator |
2026-02-16 03:14:30.847395 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-16 03:14:30.847405 | orchestrator | Monday 16 February 2026 03:14:22 +0000 (0:00:00.518) 0:00:01.461 *******
2026-02-16 03:14:30.847417 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:14:30.847436 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:14:30.847455 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:14:30.847473 | orchestrator |
2026-02-16 03:14:30.847492 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-16 03:14:30.847511 | orchestrator | Monday 16 February 2026 03:14:23 +0000 (0:00:00.606) 0:00:02.068 *******
2026-02-16 03:14:30.847530 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:14:30.847551 | orchestrator |
2026-02-16 03:14:30.847570 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-16 03:14:30.847590 | orchestrator | Monday 16 February 2026 03:14:24 +0000 (0:00:00.691) 0:00:02.759 *******
2026-02-16 03:14:30.847603 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:14:30.847615 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:14:30.847628 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:14:30.847640 | orchestrator |
2026-02-16 03:14:30.847652 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-16 03:14:30.847664 | orchestrator | Monday 16 February 2026 03:14:24 +0000 (0:00:00.603) 0:00:03.363 *******
2026-02-16 03:14:30.847677 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-16 03:14:30.847689 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-16 03:14:30.847725 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-16 03:14:30.847738 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-16 03:14:30.847750 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-16 03:14:30.847762 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-16 03:14:30.847774 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-16 03:14:30.847787 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-16 03:14:30.847798 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-16 03:14:30.847809 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-16 03:14:30.847819 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-16 03:14:30.847830 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-16 03:14:30.847840 | orchestrator |
2026-02-16 03:14:30.847851 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-16 03:14:30.847862 | orchestrator | Monday 16 February 2026 03:14:26 +0000 (0:00:02.057) 0:00:05.420 *******
2026-02-16 03:14:30.847873 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-16 03:14:30.847913 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-16 03:14:30.847925 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-16 03:14:30.847935 | orchestrator |
2026-02-16 03:14:30.847946 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-16 03:14:30.847957 | orchestrator | Monday 16 February 2026 03:14:27 +0000 (0:00:00.690) 0:00:06.111 *******
2026-02-16 03:14:30.847967 | orchestrator | changed: [testbed-node-0] =>
(item=ip_vs) 2026-02-16 03:14:30.847978 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-02-16 03:14:30.847989 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-02-16 03:14:30.848000 | orchestrator | 2026-02-16 03:14:30.848010 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-16 03:14:30.848021 | orchestrator | Monday 16 February 2026 03:14:28 +0000 (0:00:01.241) 0:00:07.352 ******* 2026-02-16 03:14:30.848032 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-16 03:14:30.848043 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:14:30.848081 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-16 03:14:30.848100 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:14:30.848127 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-16 03:14:30.848146 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:14:30.848164 | orchestrator | 2026-02-16 03:14:30.848181 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-16 03:14:30.848198 | orchestrator | Monday 16 February 2026 03:14:29 +0000 (0:00:00.488) 0:00:07.841 ******* 2026-02-16 03:14:30.848219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-16 03:14:30.848246 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-16 03:14:30.848279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-16 03:14:30.848297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 
03:14:30.848315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 03:14:30.848369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 03:14:35.749637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 03:14:35.749742 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 03:14:35.749786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 03:14:35.749800 | orchestrator | 2026-02-16 03:14:35.749813 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-16 03:14:35.749826 | orchestrator | Monday 16 February 2026 03:14:30 +0000 (0:00:01.741) 0:00:09.582 ******* 2026-02-16 03:14:35.749838 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:14:35.749850 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:14:35.749860 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:14:35.749872 | orchestrator | 2026-02-16 03:14:35.749883 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-16 03:14:35.749959 | orchestrator | Monday 16 February 2026 03:14:31 +0000 (0:00:00.853) 0:00:10.435 ******* 2026-02-16 03:14:35.749972 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-02-16 03:14:35.749984 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-02-16 
03:14:35.749994 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-02-16 03:14:35.750005 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-02-16 03:14:35.750074 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-02-16 03:14:35.750087 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-02-16 03:14:35.750097 | orchestrator | 2026-02-16 03:14:35.750108 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-16 03:14:35.750119 | orchestrator | Monday 16 February 2026 03:14:33 +0000 (0:00:01.422) 0:00:11.858 ******* 2026-02-16 03:14:35.750130 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:14:35.750141 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:14:35.750153 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:14:35.750166 | orchestrator | 2026-02-16 03:14:35.750189 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-16 03:14:35.750201 | orchestrator | Monday 16 February 2026 03:14:33 +0000 (0:00:00.820) 0:00:12.679 ******* 2026-02-16 03:14:35.750213 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:14:35.750225 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:14:35.750237 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:14:35.750250 | orchestrator | 2026-02-16 03:14:35.750262 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-16 03:14:35.750275 | orchestrator | Monday 16 February 2026 03:14:35 +0000 (0:00:01.241) 0:00:13.920 ******* 2026-02-16 03:14:35.750288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-16 03:14:35.750322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:14:35.750374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:14:35.750389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__84d915a4e60bd525b6e5cf8b3e3ca7ab03678230', '__omit_place_holder__84d915a4e60bd525b6e5cf8b3e3ca7ab03678230'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-16 03:14:35.750401 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:14:35.750412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-16 03:14:35.750424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:14:35.750436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:14:35.750457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__84d915a4e60bd525b6e5cf8b3e3ca7ab03678230', '__omit_place_holder__84d915a4e60bd525b6e5cf8b3e3ca7ab03678230'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-16 03:14:35.750475 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:14:35.750500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-16 03:14:38.667601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:14:38.667714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:14:38.667731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__84d915a4e60bd525b6e5cf8b3e3ca7ab03678230', '__omit_place_holder__84d915a4e60bd525b6e5cf8b3e3ca7ab03678230'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-16 03:14:38.667744 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:14:38.667758 | orchestrator | 2026-02-16 03:14:38.667770 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-02-16 03:14:38.667783 | orchestrator | Monday 16 February 2026 03:14:35 +0000 (0:00:00.567) 0:00:14.488 ******* 2026-02-16 03:14:38.667795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-16 03:14:38.667807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-16 03:14:38.667856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-16 03:14:38.667937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 03:14:38.667953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:14:38.667965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 03:14:38.667977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__84d915a4e60bd525b6e5cf8b3e3ca7ab03678230', '__omit_place_holder__84d915a4e60bd525b6e5cf8b3e3ca7ab03678230'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-16 03:14:38.667988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:14:38.668013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__84d915a4e60bd525b6e5cf8b3e3ca7ab03678230', 
'__omit_place_holder__84d915a4e60bd525b6e5cf8b3e3ca7ab03678230'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-16 03:14:38.668045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 03:14:46.955280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:14:46.955384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__84d915a4e60bd525b6e5cf8b3e3ca7ab03678230', 
'__omit_place_holder__84d915a4e60bd525b6e5cf8b3e3ca7ab03678230'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-16 03:14:46.955397 | orchestrator | 2026-02-16 03:14:46.955408 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-16 03:14:46.955418 | orchestrator | Monday 16 February 2026 03:14:38 +0000 (0:00:02.916) 0:00:17.404 ******* 2026-02-16 03:14:46.955426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-16 03:14:46.955436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-16 03:14:46.955475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-16 03:14:46.955485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 03:14:46.955508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 03:14:46.955518 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 03:14:46.955526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 03:14:46.955535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 03:14:46.955549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 03:14:46.955557 | orchestrator | 2026-02-16 03:14:46.955566 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-16 03:14:46.955574 | orchestrator | Monday 16 February 2026 03:14:41 +0000 (0:00:03.300) 0:00:20.705 ******* 2026-02-16 03:14:46.955583 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-16 03:14:46.955592 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-16 03:14:46.955604 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-16 03:14:46.955612 | orchestrator | 2026-02-16 03:14:46.955620 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-16 03:14:46.955628 | orchestrator | Monday 16 February 2026 03:14:43 +0000 (0:00:01.803) 0:00:22.509 ******* 2026-02-16 03:14:46.955636 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-16 03:14:46.955644 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-16 03:14:46.955652 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-16 03:14:46.955660 | orchestrator | 2026-02-16 03:14:46.955668 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-16 03:14:46.955676 | orchestrator | Monday 16 February 2026 03:14:46 +0000 
(0:00:02.683) 0:00:25.192 ******* 2026-02-16 03:14:46.955684 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:14:46.955693 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:14:46.955701 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:14:46.955709 | orchestrator | 2026-02-16 03:14:46.955722 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-16 03:14:57.964790 | orchestrator | Monday 16 February 2026 03:14:46 +0000 (0:00:00.506) 0:00:25.699 ******* 2026-02-16 03:14:57.964914 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-16 03:14:57.964988 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-16 03:14:57.965002 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-16 03:14:57.965014 | orchestrator | 2026-02-16 03:14:57.965028 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-16 03:14:57.965040 | orchestrator | Monday 16 February 2026 03:14:48 +0000 (0:00:01.963) 0:00:27.663 ******* 2026-02-16 03:14:57.965052 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-16 03:14:57.965063 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-16 03:14:57.965075 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-16 03:14:57.965109 | orchestrator | 2026-02-16 03:14:57.965121 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-16 03:14:57.965132 | orchestrator | Monday 16 February 2026 
03:14:50 +0000 (0:00:01.975) 0:00:29.639 ******* 2026-02-16 03:14:57.965143 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-16 03:14:57.965155 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-16 03:14:57.965165 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-16 03:14:57.965176 | orchestrator | 2026-02-16 03:14:57.965187 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-16 03:14:57.965198 | orchestrator | Monday 16 February 2026 03:14:52 +0000 (0:00:01.367) 0:00:31.007 ******* 2026-02-16 03:14:57.965212 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-16 03:14:57.965232 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-16 03:14:57.965251 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-16 03:14:57.965269 | orchestrator | 2026-02-16 03:14:57.965288 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-16 03:14:57.965306 | orchestrator | Monday 16 February 2026 03:14:53 +0000 (0:00:01.381) 0:00:32.389 ******* 2026-02-16 03:14:57.965323 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:14:57.965343 | orchestrator | 2026-02-16 03:14:57.965360 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-16 03:14:57.965379 | orchestrator | Monday 16 February 2026 03:14:54 +0000 (0:00:00.511) 0:00:32.901 ******* 2026-02-16 03:14:57.965402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-16 03:14:57.965438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-16 03:14:57.965461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-16 03:14:57.965506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 03:14:57.965544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 03:14:57.965567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 03:14:57.965589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 03:14:57.965607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 03:14:57.965625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 03:14:57.965643 | orchestrator | 2026-02-16 03:14:57.965662 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-16 03:14:57.965679 | orchestrator | Monday 16 February 2026 03:14:57 +0000 (0:00:03.271) 0:00:36.172 ******* 2026-02-16 03:14:57.965722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-16 03:14:58.709688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:14:58.709814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:14:58.709844 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:14:58.709866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-16 03:14:58.709880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:14:58.709907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:14:58.709970 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:14:58.709992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-16 03:14:58.710101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:14:58.710118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:14:58.710130 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:14:58.710142 | orchestrator | 2026-02-16 03:14:58.710155 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-16 
03:14:58.710167 | orchestrator | Monday 16 February 2026 03:14:57 +0000 (0:00:00.536) 0:00:36.708 ******* 2026-02-16 03:14:58.710179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-16 03:14:58.710191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:14:58.710202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:14:58.710213 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:14:58.710233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-16 03:14:58.710262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:14:59.471179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:14:59.471283 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:14:59.471303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-16 03:14:59.471325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:14:59.471347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:14:59.471366 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:14:59.471385 | orchestrator | 2026-02-16 03:14:59.471406 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-16 03:14:59.471426 | orchestrator | Monday 16 February 2026 03:14:58 +0000 (0:00:00.743) 0:00:37.452 ******* 2026-02-16 03:14:59.471466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-16 03:14:59.471510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:14:59.471544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:14:59.471556 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:14:59.471567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-16 03:14:59.471579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:14:59.471590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:14:59.471601 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:14:59.471618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-16 03:14:59.471657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:14:59.471669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:14:59.471701 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:15:00.799336 | orchestrator | 2026-02-16 03:15:00.799440 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-16 03:15:00.799456 | orchestrator | Monday 16 February 2026 03:14:59 +0000 (0:00:00.753) 0:00:38.205 ******* 2026-02-16 03:15:00.799473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-16 03:15:00.799488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:15:00.799500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:15:00.799512 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:15:00.799525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-16 03:15:00.799574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:15:00.799587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:15:00.799599 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:15:00.799628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-16 03:15:00.799640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:15:00.799652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:15:00.799663 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:15:00.799674 | orchestrator | 2026-02-16 03:15:00.799686 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-16 03:15:00.799697 | orchestrator | Monday 16 February 2026 03:14:59 +0000 (0:00:00.543) 0:00:38.748 ******* 2026-02-16 03:15:00.799709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-16 03:15:00.799734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:15:00.799747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:15:00.799758 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:15:00.799778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-16 03:15:01.782353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:15:01.782458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:15:01.782475 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:15:01.782488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-16 03:15:01.782522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:15:01.782533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:15:01.782543 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:15:01.782553 | orchestrator | 2026-02-16 03:15:01.782565 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-16 03:15:01.782576 | orchestrator | Monday 16 February 2026 03:15:00 +0000 (0:00:00.794) 0:00:39.543 ******* 2026-02-16 03:15:01.783024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-02-16 03:15:01.783066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:15:01.783080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:15:01.783093 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:15:01.783104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-02-16 03:15:01.783133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:15:01.783145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:15:01.783157 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:15:01.783169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-02-16 03:15:01.783188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:15:03.040086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:15:03.040200 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:15:03.040226 | orchestrator | 2026-02-16 03:15:03.040244 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-16 03:15:03.040262 | orchestrator | Monday 16 February 2026 03:15:01 +0000 (0:00:00.973) 0:00:40.517 ******* 2026-02-16 03:15:03.040279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-16 03:15:03.040339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:15:03.040358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:15:03.040374 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:15:03.040389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-16 03:15:03.040405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:15:03.040444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:15:03.040462 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:15:03.040478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-16 03:15:03.040506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:15:03.040530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:15:03.040547 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:15:03.040562 | orchestrator | 2026-02-16 03:15:03.040579 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-16 03:15:03.040596 | orchestrator | Monday 16 February 2026 03:15:02 +0000 (0:00:00.536) 0:00:41.053 ******* 2026-02-16 03:15:03.040612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-16 03:15:03.040631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:15:03.040661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:15:09.298518 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:15:09.298609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-16 03:15:09.298641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:15:09.298661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:15:09.298669 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:15:09.298676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-16 03:15:09.298684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 03:15:09.298692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 03:15:09.298699 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:15:09.298706 | orchestrator | 2026-02-16 03:15:09.298714 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-16 03:15:09.298723 | orchestrator | Monday 16 February 2026 03:15:03 +0000 (0:00:00.728) 0:00:41.782 ******* 2026-02-16 03:15:09.298730 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-16 03:15:09.298757 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-16 03:15:09.298765 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-16 03:15:09.298773 | orchestrator | 2026-02-16 03:15:09.298781 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-16 03:15:09.298789 | orchestrator | Monday 16 February 2026 03:15:04 +0000 (0:00:01.662) 0:00:43.445 ******* 2026-02-16 03:15:09.298797 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-16 03:15:09.298806 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-16 03:15:09.298814 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-16 03:15:09.298822 | orchestrator | 2026-02-16 03:15:09.298830 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-16 03:15:09.298838 | orchestrator | Monday 16 February 2026 03:15:06 +0000 (0:00:01.572) 0:00:45.017 ******* 2026-02-16 03:15:09.298845 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-16 03:15:09.298853 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-16 03:15:09.298861 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-16 03:15:09.298868 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-16 03:15:09.298876 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:15:09.298884 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-16 03:15:09.298891 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:15:09.298899 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-16 03:15:09.298910 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:15:09.298916 | orchestrator |
2026-02-16 03:15:09.298922 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-02-16 03:15:09.298928 | orchestrator | Monday 16 February 2026 03:15:07 +0000 (0:00:00.743) 0:00:45.760 *******
2026-02-16 03:15:09.298986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-16 03:15:09.298995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-16 03:15:09.299001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-16 03:15:09.299022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-16 03:15:13.104194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-16 03:15:13.104307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-16 03:15:13.104338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-16 03:15:13.104352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-16 03:15:13.104363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-16 03:15:13.104399 | orchestrator |
2026-02-16 03:15:13.104414 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-02-16 03:15:13.104426 | orchestrator | Monday 16 February 2026 03:15:09 +0000 (0:00:02.280) 0:00:48.041 *******
2026-02-16 03:15:13.104438 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:15:13.104449 | orchestrator |
2026-02-16 03:15:13.104461 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-02-16 03:15:13.104471 | orchestrator | Monday 16 February 2026 03:15:09 +0000 (0:00:00.716) 0:00:48.757 *******
2026-02-16 03:15:13.104504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-16 03:15:13.104518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-16 03:15:13.104530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-16 03:15:13.104547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-16 03:15:13.104559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-16 03:15:13.104579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-16 03:15:13.104591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-16 03:15:13.104611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-16 03:15:13.713309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-16 03:15:13.713416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-16 03:15:13.713435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-16 03:15:13.713642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-16 03:15:13.713674 | orchestrator |
2026-02-16 03:15:13.713688 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
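Editor's note: each `haproxy-config` item above carries a per-service `haproxy` dict with one internal and one external entry, distinguished by the `external` flag. The following is a minimal illustrative sketch (not kolla-ansible's actual code; `split_frontends` is a hypothetical helper) of how such entries could be partitioned, using the aodh values from the log:

```python
# Hypothetical helper: partition a service's haproxy entries into
# internal and external frontends based on the 'external' flag.
aodh_haproxy = {
    'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False,
                 'port': '8042', 'listen_port': '8042'},
    'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True,
                          'external_fqdn': 'api.testbed.osism.xyz',
                          'port': '8042', 'listen_port': '8042'},
}

def split_frontends(haproxy):
    # Internal frontends bind the internal VIP; external ones are
    # additionally tied to an external_fqdn.
    internal = {k: v for k, v in haproxy.items() if not v['external']}
    external = {k: v for k, v in haproxy.items() if v['external']}
    return internal, external

internal, external = split_frontends(aodh_haproxy)
```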
2026-02-16 03:15:13.713701 | orchestrator | Monday 16 February 2026 03:15:13 +0000 (0:00:03.088) 0:00:51.846 *******
2026-02-16 03:15:13.713714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-16 03:15:13.713747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-16 03:15:13.713760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-16 03:15:13.713778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-16 03:15:13.713791 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:15:13.713804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-16 03:15:13.713827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-16 03:15:13.713841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-16 03:15:13.713854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-16 03:15:13.713867 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:15:13.713890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-16 03:15:21.709528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-16 03:15:21.709651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
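Editor's note: the container items above each carry a kolla-style `healthcheck` dict with string-valued second counts. As an illustration only (this is not kolla-ansible's actual helper; the output field names follow the Docker Engine API's `HealthConfig`, which takes durations in nanoseconds), such a dict could be translated like this:

```python
# Illustrative sketch: convert a kolla-style healthcheck dict (as seen
# in the log items) into a Docker-API-shaped HealthConfig mapping.
def to_docker_healthcheck(hc):
    sec = 1_000_000_000  # Docker API durations are in nanoseconds
    return {
        'Test': hc['test'],
        'Interval': int(hc['interval']) * sec,
        'Timeout': int(hc['timeout']) * sec,
        'Retries': int(hc['retries']),
        'StartPeriod': int(hc['start_period']) * sec,
    }

# Values taken from the aodh_api item logged above.
hc = {'interval': '30', 'retries': '3', 'start_period': '5',
      'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'],
      'timeout': '30'}
docker_hc = to_docker_healthcheck(hc)
```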
2026-02-16 03:15:21.709666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-16 03:15:21.709677 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:15:21.709690 | orchestrator |
2026-02-16 03:15:21.709701 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-02-16 03:15:21.709713 | orchestrator | Monday 16 February 2026 03:15:13 +0000 (0:00:00.604) 0:00:52.450 *******
2026-02-16 03:15:21.709723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-02-16 03:15:21.709736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-02-16 03:15:21.709747 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:15:21.709757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-02-16 03:15:21.709767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-02-16 03:15:21.709777 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:15:21.709786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-02-16 03:15:21.709824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-02-16 03:15:21.709834 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:15:21.709844 | orchestrator |
2026-02-16 03:15:21.709854 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-02-16 03:15:21.709864 | orchestrator | Monday 16 February 2026 03:15:14 +0000 (0:00:01.034) 0:00:53.485 *******
2026-02-16 03:15:21.709873 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:15:21.709883 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:15:21.709892 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:15:21.709902 | orchestrator |
2026-02-16 03:15:21.709912 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-02-16 03:15:21.709921 | orchestrator | Monday 16 February 2026 03:15:15 +0000 (0:00:01.246) 0:00:54.731 *******
2026-02-16 03:15:21.709931 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:15:21.709941 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:15:21.709951 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:15:21.710081 | orchestrator |
2026-02-16 03:15:21.710103 | orchestrator | TASK [include_role : barbican] *************************************************
2026-02-16 03:15:21.710115 | orchestrator | Monday 16 February 2026 03:15:17 +0000 (0:00:01.867) 0:00:56.599 *******
2026-02-16 03:15:21.710127 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:15:21.710138 | orchestrator |
2026-02-16 03:15:21.710167 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-02-16 03:15:21.710186 | orchestrator | Monday 16 February 2026 03:15:18 +0000 (0:00:00.621) 0:00:57.220 *******
2026-02-16 03:15:21.710200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-16 03:15:21.710215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-16 03:15:21.710227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-16 03:15:21.710240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-16 03:15:21.710252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-16 03:15:21.710283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-16 03:15:22.288791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-16 03:15:22.288895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-16 03:15:22.288910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-16 03:15:22.288924 | orchestrator |
2026-02-16 03:15:22.288937 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-02-16 03:15:22.288951 | orchestrator | Monday 16 February 2026 03:15:21 +0000 (0:00:03.230) 0:01:00.451 *******
2026-02-16 03:15:22.289029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-16 03:15:22.289081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-16 03:15:22.289114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-16 03:15:22.289127 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:15:22.289140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-16 03:15:22.289151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-16 03:15:22.289163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-16 03:15:22.289182 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:15:22.289198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-16 03:15:22.289219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-16 03:15:31.355330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-16 03:15:31.355451 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:15:31.355470 | orchestrator | 2026-02-16 03:15:31.355483 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-16 03:15:31.355496 | orchestrator | Monday 16 February 2026 03:15:22 +0000 (0:00:00.578) 0:01:01.029 ******* 2026-02-16 03:15:31.355509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-16 03:15:31.355523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-16 03:15:31.355536 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:15:31.355547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-16 03:15:31.355558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-16 03:15:31.355569 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:15:31.355580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-16 03:15:31.355614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-16 03:15:31.355625 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:15:31.355637 | orchestrator | 2026-02-16 03:15:31.355648 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-16 03:15:31.355659 | orchestrator | Monday 16 February 2026 03:15:23 +0000 (0:00:00.757) 0:01:01.787 ******* 2026-02-16 03:15:31.355669 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:15:31.355680 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:15:31.355691 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:15:31.355702 | orchestrator | 2026-02-16 03:15:31.355713 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-16 03:15:31.355724 | orchestrator | Monday 16 February 2026 03:15:24 +0000 (0:00:01.451) 0:01:03.238 ******* 2026-02-16 03:15:31.355735 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:15:31.355746 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:15:31.355757 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:15:31.355767 | orchestrator | 2026-02-16 03:15:31.355778 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-16 03:15:31.355789 | orchestrator | 
Monday 16 February 2026 03:15:26 +0000 (0:00:01.942) 0:01:05.180 ******* 2026-02-16 03:15:31.355800 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:15:31.355810 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:15:31.355821 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:15:31.355832 | orchestrator | 2026-02-16 03:15:31.355843 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-16 03:15:31.355853 | orchestrator | Monday 16 February 2026 03:15:26 +0000 (0:00:00.281) 0:01:05.462 ******* 2026-02-16 03:15:31.355867 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:15:31.355879 | orchestrator | 2026-02-16 03:15:31.355906 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-16 03:15:31.355919 | orchestrator | Monday 16 February 2026 03:15:27 +0000 (0:00:00.612) 0:01:06.075 ******* 2026-02-16 03:15:31.355953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-16 03:15:31.355996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-16 03:15:31.356029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-16 03:15:31.356050 | orchestrator | 2026-02-16 03:15:31.356069 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-16 03:15:31.356084 | orchestrator | Monday 16 February 2026 03:15:29 +0000 (0:00:02.577) 0:01:08.652 ******* 2026-02-16 03:15:31.356096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-16 03:15:31.356107 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:15:31.356126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-16 03:15:31.356138 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:15:31.356158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-16 03:15:38.470969 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:15:38.471137 | orchestrator | 2026-02-16 03:15:38.471154 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-16 03:15:38.471192 | orchestrator | Monday 16 February 2026 03:15:31 +0000 (0:00:01.445) 0:01:10.098 ******* 2026-02-16 03:15:38.471207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-16 03:15:38.471221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-16 03:15:38.471234 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:15:38.471245 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-16 03:15:38.471256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-16 03:15:38.471267 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:15:38.471279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-16 03:15:38.471290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-16 03:15:38.471301 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:15:38.471312 | orchestrator | 2026-02-16 03:15:38.471324 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-16 03:15:38.471335 | orchestrator | Monday 16 February 2026 03:15:32 +0000 (0:00:01.518) 0:01:11.616 ******* 2026-02-16 03:15:38.471346 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:15:38.471357 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:15:38.471368 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:15:38.471378 | orchestrator | 2026-02-16 03:15:38.471389 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-16 03:15:38.471400 | orchestrator | Monday 16 February 2026 03:15:33 +0000 (0:00:00.391) 0:01:12.008 ******* 2026-02-16 03:15:38.471411 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:15:38.471422 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:15:38.471432 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:15:38.471443 | orchestrator | 2026-02-16 03:15:38.471454 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-16 03:15:38.471465 | orchestrator | Monday 16 February 2026 03:15:34 +0000 (0:00:01.200) 0:01:13.208 ******* 2026-02-16 03:15:38.471483 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:15:38.471496 | orchestrator | 2026-02-16 03:15:38.471550 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-16 03:15:38.471564 | orchestrator | Monday 16 February 2026 03:15:35 +0000 (0:00:00.848) 0:01:14.057 ******* 2026-02-16 03:15:38.471606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-16 03:15:38.471634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 03:15:38.471658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-16 
03:15:38.471681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-16 03:15:38.471710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-16 03:15:38.471756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 03:15:39.091053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-16 03:15:39.091157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-16 03:15:39.091174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-16 03:15:39.091206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 03:15:39.091219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-16 03:15:39.091275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-16 03:15:39.091288 | orchestrator |
2026-02-16 03:15:39.091302 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-02-16 03:15:39.091315 | orchestrator | Monday 16 February 2026 03:15:38 +0000 (0:00:03.235) 0:01:17.293 *******
2026-02-16 03:15:39.091327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-16 03:15:39.091340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 03:15:39.091352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-16 03:15:39.091369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-16 03:15:39.091389 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:15:39.091410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-16 03:15:44.886515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 03:15:44.886632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-16 03:15:44.886651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-16 03:15:44.886666 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:15:44.886697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-16 03:15:44.886733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 03:15:44.886764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-16 03:15:44.886777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-16 03:15:44.886788 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:15:44.886799 | orchestrator |
2026-02-16 03:15:44.886812 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-02-16 03:15:44.886824 | orchestrator | Monday 16 February 2026 03:15:39 +0000 (0:00:00.641) 0:01:17.935 *******
2026-02-16 03:15:44.886836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-16 03:15:44.886848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-16 03:15:44.886861 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:15:44.886872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-16 03:15:44.886883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-16 03:15:44.886902 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:15:44.886913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-16 03:15:44.886929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-16 03:15:44.886940 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:15:44.886951 | orchestrator |
2026-02-16 03:15:44.886962 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-02-16 03:15:44.886974 | orchestrator | Monday 16 February 2026 03:15:40 +0000 (0:00:01.055) 0:01:18.990 *******
2026-02-16 03:15:44.886984 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:15:44.887025 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:15:44.887037 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:15:44.887050 | orchestrator |
2026-02-16 03:15:44.887063 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-02-16 03:15:44.887075 | orchestrator | Monday 16 February 2026 03:15:41 +0000 (0:00:01.265) 0:01:20.256 *******
2026-02-16 03:15:44.887087 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:15:44.887100 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:15:44.887112 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:15:44.887124 | orchestrator |
2026-02-16 03:15:44.887137 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-02-16 03:15:44.887150 | orchestrator | Monday 16 February 2026 03:15:43 +0000 (0:00:01.928) 0:01:22.184 *******
2026-02-16 03:15:44.887162 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:15:44.887174 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:15:44.887188 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:15:44.887200 | orchestrator |
2026-02-16 03:15:44.887213 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-02-16 03:15:44.887225 | orchestrator | Monday 16 February 2026 03:15:43 +0000 (0:00:00.282) 0:01:22.467 *******
2026-02-16 03:15:44.887238 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:15:44.887250 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:15:44.887263 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:15:44.887275 | orchestrator |
2026-02-16 03:15:44.887287 | orchestrator | TASK [include_role : designate] ************************************************
2026-02-16 03:15:44.887299 | orchestrator | Monday 16 February 2026 03:15:43 +0000 (0:00:00.265) 0:01:22.732 *******
2026-02-16 03:15:44.887312 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:15:44.887325 | orchestrator |
2026-02-16 03:15:44.887337 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-02-16 03:15:44.887391 | orchestrator | Monday 16 February 2026 03:15:44 +0000 (0:00:00.893) 0:01:23.626 *******
2026-02-16 03:15:47.998929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 03:15:47.999140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 03:15:47.999164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 03:15:47.999192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 03:15:47.999204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 03:15:47.999239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 03:15:47.999252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 03:15:47.999271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 03:15:47.999284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 03:15:47.999301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-16 03:15:47.999314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 03:15:47.999325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 03:15:47.999345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 03:15:48.832527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-16 03:15:48.832658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 03:15:48.832690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 03:15:48.832705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 03:15:48.832717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 03:15:48.832728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 03:15:48.832758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 03:15:48.832778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-16 03:15:48.832790 | orchestrator |
2026-02-16 03:15:48.832803 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-02-16 03:15:48.832815 | orchestrator | Monday 16 February 2026 03:15:48 +0000 (0:00:03.372) 0:01:26.999 *******
2026-02-16 03:15:48.832833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 03:15:48.832846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 03:15:48.832858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 03:15:48.832869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 03:15:48.832895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 03:15:49.232065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 03:15:49.232173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-16 03:15:49.232231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 03:15:49.232247 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:15:49.232262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 03:15:49.232275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 03:15:49.232307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 03:15:49.232337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 03:15:49.232364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 03:15:49.232381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-16 03:15:49.232393 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:15:49.232411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 03:15:49.232432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 03:15:49.232466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-16 03:15:49.232497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-16 03:15:58.708509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-16 03:15:58.708653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-16 03:15:58.708682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-16 03:15:58.708702 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:15:58.708720 | orchestrator | 2026-02-16 03:15:58.708739 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-16 03:15:58.708758 | orchestrator | Monday 16 February 2026 03:15:49 +0000 (0:00:00.974) 0:01:27.974 ******* 2026-02-16 03:15:58.708775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-16 03:15:58.708815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-16 03:15:58.708828 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:15:58.708838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-16 03:15:58.708848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-16 03:15:58.708857 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:15:58.708867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-16 03:15:58.708877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-16 03:15:58.708886 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:15:58.708896 | orchestrator | 2026-02-16 03:15:58.708906 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-16 03:15:58.708916 | orchestrator | Monday 16 February 2026 03:15:50 +0000 (0:00:01.187) 0:01:29.161 ******* 2026-02-16 03:15:58.708926 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:15:58.708935 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:15:58.708945 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:15:58.708955 | orchestrator | 2026-02-16 03:15:58.708965 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-16 03:15:58.708975 | orchestrator | Monday 16 February 2026 03:15:51 +0000 (0:00:01.248) 0:01:30.409 ******* 2026-02-16 03:15:58.708984 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:15:58.708994 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:15:58.709003 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:15:58.709077 | 
orchestrator |
2026-02-16 03:15:58.709094 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-02-16 03:15:58.709105 | orchestrator | Monday 16 February 2026 03:15:53 +0000 (0:00:01.985) 0:01:32.395 *******
2026-02-16 03:15:58.709132 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:15:58.709142 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:15:58.709152 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:15:58.709161 | orchestrator |
2026-02-16 03:15:58.709171 | orchestrator | TASK [include_role : glance] ***************************************************
2026-02-16 03:15:58.709181 | orchestrator | Monday 16 February 2026 03:15:53 +0000 (0:00:00.298) 0:01:32.694 *******
2026-02-16 03:15:58.709190 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:15:58.709200 | orchestrator |
2026-02-16 03:15:58.709209 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-02-16 03:15:58.709219 | orchestrator | Monday 16 February 2026 03:15:54 +0000 (0:00:00.944) 0:01:33.639 *******
2026-02-16 03:15:58.709240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api':
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-16 03:15:58.709263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-16 03:15:58.709289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-16 03:16:01.746773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-16 03:16:01.746894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-16 03:16:01.746954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-16 03:16:01.746970 | orchestrator | 2026-02-16 03:16:01.746983 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-16 03:16:01.746995 | orchestrator | Monday 16 February 2026 03:15:58 +0000 (0:00:03.920) 0:01:37.559 ******* 2026-02-16 03:16:01.747008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-16 03:16:01.747096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-16 03:16:05.161689 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:16:05.161799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-16 
03:16:05.161844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-16 03:16:05.161895 | orchestrator | skipping: [testbed-node-1] 
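[editor's note] The healthcheck dicts repeated in the loop items above ('interval', 'retries', 'start_period', 'timeout' as strings, plus a CMD-SHELL test) follow the container healthcheck shape kolla-ansible renders. A minimal sketch of how such a dict maps onto docker-CLI-style health flags; the helper name and the flag mapping are illustrative, not kolla-ansible code:

```python
# Hypothetical helper (not from kolla-ansible): translate a healthcheck
# dict as logged above into docker-CLI-style --health-* flags.
def healthcheck_flags(hc):
    # Drop the 'CMD-SHELL' marker; the remainder is the shell command.
    cmd = " ".join(hc["test"][1:])
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Example values taken from the glance-api item for testbed-node-1 above.
hc = {'interval': '30', 'retries': '3', 'start_period': '5',
      'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'],
      'timeout': '30'}
for flag in healthcheck_flags(hc):
    print(flag)
```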
2026-02-16 03:16:05.161943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-16 03:16:05.161964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-16 03:16:05.161987 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:16:05.161998 | orchestrator | 2026-02-16 03:16:05.162011 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-16 03:16:05.162155 | orchestrator | 
Monday 16 February 2026 03:16:01 +0000 (0:00:03.032) 0:01:40.592 ******* 2026-02-16 03:16:05.162169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-16 03:16:05.162193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-16 03:16:13.065230 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:16:13.065338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-16 03:16:13.065357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-16 03:16:13.065371 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:16:13.065383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-16 03:16:13.065436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-16 03:16:13.065449 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:16:13.065460 | orchestrator | 2026-02-16 03:16:13.065472 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-16 03:16:13.065484 | orchestrator | Monday 16 February 2026 03:16:05 +0000 (0:00:03.312) 0:01:43.905 ******* 2026-02-16 03:16:13.065496 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:16:13.065507 | orchestrator 
| changed: [testbed-node-1] 2026-02-16 03:16:13.065517 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:16:13.065528 | orchestrator | 2026-02-16 03:16:13.065539 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-16 03:16:13.065550 | orchestrator | Monday 16 February 2026 03:16:06 +0000 (0:00:01.277) 0:01:45.182 ******* 2026-02-16 03:16:13.065561 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:16:13.065571 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:16:13.065582 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:16:13.065592 | orchestrator | 2026-02-16 03:16:13.065603 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-16 03:16:13.065614 | orchestrator | Monday 16 February 2026 03:16:08 +0000 (0:00:01.926) 0:01:47.108 ******* 2026-02-16 03:16:13.065624 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:16:13.065635 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:16:13.065646 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:16:13.065657 | orchestrator | 2026-02-16 03:16:13.065667 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-16 03:16:13.065678 | orchestrator | Monday 16 February 2026 03:16:08 +0000 (0:00:00.304) 0:01:47.413 ******* 2026-02-16 03:16:13.065691 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:16:13.065704 | orchestrator | 2026-02-16 03:16:13.065716 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-16 03:16:13.065729 | orchestrator | Monday 16 February 2026 03:16:09 +0000 (0:00:00.996) 0:01:48.410 ******* 2026-02-16 03:16:13.065759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-16 03:16:13.065775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-16 03:16:13.065797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-16 03:16:13.065810 | 
orchestrator | 2026-02-16 03:16:13.065822 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-16 03:16:13.065833 | orchestrator | Monday 16 February 2026 03:16:12 +0000 (0:00:02.833) 0:01:51.244 ******* 2026-02-16 03:16:13.065849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-16 03:16:13.065861 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:16:13.065873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-16 03:16:13.065884 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:16:13.065896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-16 03:16:13.065907 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:16:13.065918 | orchestrator | 2026-02-16 03:16:13.065935 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-16 03:16:13.065954 | orchestrator | Monday 16 February 2026 03:16:12 +0000 (0:00:00.357) 0:01:51.601 ******* 2026-02-16 03:16:13.065975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-16 03:16:13.066005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-16 03:16:21.404807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-16 03:16:21.404920 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:16:21.404939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-16 03:16:21.404953 | orchestrator | skipping: 
[testbed-node-1] 2026-02-16 03:16:21.404965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-16 03:16:21.404976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-16 03:16:21.404987 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:16:21.404998 | orchestrator | 2026-02-16 03:16:21.405011 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-16 03:16:21.405024 | orchestrator | Monday 16 February 2026 03:16:13 +0000 (0:00:00.880) 0:01:52.482 ******* 2026-02-16 03:16:21.405035 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:16:21.405094 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:16:21.405106 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:16:21.405117 | orchestrator | 2026-02-16 03:16:21.405128 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-16 03:16:21.405139 | orchestrator | Monday 16 February 2026 03:16:15 +0000 (0:00:01.340) 0:01:53.822 ******* 2026-02-16 03:16:21.405151 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:16:21.405162 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:16:21.405173 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:16:21.405184 | orchestrator | 2026-02-16 03:16:21.405195 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-16 03:16:21.405206 | orchestrator | Monday 16 February 2026 03:16:16 +0000 (0:00:01.910) 0:01:55.733 ******* 2026-02-16 03:16:21.405217 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:16:21.405228 | orchestrator | skipping: [testbed-node-1] 2026-02-16 
03:16:21.405239 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:16:21.405250 | orchestrator | 2026-02-16 03:16:21.405261 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-16 03:16:21.405272 | orchestrator | Monday 16 February 2026 03:16:17 +0000 (0:00:00.307) 0:01:56.041 ******* 2026-02-16 03:16:21.405283 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:16:21.405294 | orchestrator | 2026-02-16 03:16:21.405306 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-16 03:16:21.405317 | orchestrator | Monday 16 February 2026 03:16:18 +0000 (0:00:01.090) 0:01:57.132 ******* 2026-02-16 03:16:21.405394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-16 03:16:21.405445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-16 03:16:21.405473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-16 03:16:22.941115 | orchestrator | 2026-02-16 03:16:22.941203 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-16 03:16:22.941214 | orchestrator | Monday 16 February 2026 03:16:21 +0000 (0:00:03.014) 0:02:00.146 ******* 2026-02-16 03:16:22.941241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-16 03:16:22.941273 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:16:22.941300 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-16 03:16:22.941309 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:16:22.941321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-16 03:16:22.941334 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:16:22.941341 | orchestrator | 2026-02-16 03:16:22.941349 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-16 03:16:22.941355 | orchestrator | Monday 16 February 2026 03:16:22 +0000 (0:00:00.634) 0:02:00.781 ******* 2026-02-16 03:16:22.941363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-16 03:16:22.941372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-16 03:16:22.941382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-16 03:16:22.941394 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-16 03:16:31.117804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-16 03:16:31.117918 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:16:31.117938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-16 03:16:31.117954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-16 03:16:31.117968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-16 03:16:31.117998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-16 03:16:31.118012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-16 03:16:31.118171 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:16:31.118186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-16 03:16:31.118198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-16 03:16:31.118209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-16 03:16:31.118220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-16 03:16:31.118232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
2026-02-16 03:16:31.118243 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:16:31.118254 | orchestrator | 2026-02-16 03:16:31.118266 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-16 03:16:31.118279 | orchestrator | Monday 16 February 2026 03:16:22 +0000 (0:00:00.900) 0:02:01.682 ******* 2026-02-16 03:16:31.118290 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:16:31.118301 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:16:31.118311 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:16:31.118322 | orchestrator | 2026-02-16 03:16:31.118333 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-16 03:16:31.118345 | orchestrator | Monday 16 February 2026 03:16:24 +0000 (0:00:01.524) 0:02:03.206 ******* 2026-02-16 03:16:31.118357 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:16:31.118370 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:16:31.118382 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:16:31.118395 | orchestrator | 2026-02-16 03:16:31.118408 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-16 03:16:31.118420 | orchestrator | Monday 16 February 2026 03:16:26 +0000 (0:00:01.940) 0:02:05.147 ******* 2026-02-16 03:16:31.118432 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:16:31.118444 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:16:31.118475 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:16:31.118488 | orchestrator | 2026-02-16 03:16:31.118501 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-16 03:16:31.118514 | orchestrator | Monday 16 February 2026 03:16:26 +0000 (0:00:00.330) 0:02:05.477 ******* 2026-02-16 03:16:31.118526 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:16:31.118537 | orchestrator | skipping: [testbed-node-1] 
2026-02-16 03:16:31.118550 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:16:31.118562 | orchestrator | 2026-02-16 03:16:31.118573 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-16 03:16:31.118586 | orchestrator | Monday 16 February 2026 03:16:27 +0000 (0:00:00.286) 0:02:05.763 ******* 2026-02-16 03:16:31.118598 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:16:31.118610 | orchestrator | 2026-02-16 03:16:31.118622 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-16 03:16:31.118643 | orchestrator | Monday 16 February 2026 03:16:28 +0000 (0:00:01.082) 0:02:06.846 ******* 2026-02-16 03:16:31.118666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:16:31.118685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 03:16:31.118700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 03:16:31.118713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:16:31.118733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 03:16:31.670822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 03:16:31.670942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:16:31.670959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 03:16:31.670970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 03:16:31.670980 | 
orchestrator | 2026-02-16 03:16:31.670992 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-16 03:16:31.671004 | orchestrator | Monday 16 February 2026 03:16:31 +0000 (0:00:03.014) 0:02:09.860 ******* 2026-02-16 03:16:31.671031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-16 03:16:31.671089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-02-16 03:16:31.671106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 03:16:31.671117 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:16:31.671129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-16 03:16:31.671140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 03:16:31.671150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 03:16:31.671167 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:16:31.671185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-16 03:16:40.277882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 03:16:40.277956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 03:16:40.277963 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:16:40.277969 | orchestrator | 2026-02-16 03:16:40.277974 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-16 03:16:40.277979 | orchestrator | Monday 16 February 2026 03:16:31 +0000 (0:00:00.550) 0:02:10.411 ******* 2026-02-16 03:16:40.277984 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-16 03:16:40.277991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-16 03:16:40.278006 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:16:40.278010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-16 03:16:40.278049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-16 03:16:40.278054 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:16:40.278138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-16 03:16:40.278147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-16 03:16:40.278155 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:16:40.278159 
| orchestrator | 2026-02-16 03:16:40.278163 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-16 03:16:40.278167 | orchestrator | Monday 16 February 2026 03:16:32 +0000 (0:00:00.956) 0:02:11.367 ******* 2026-02-16 03:16:40.278171 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:16:40.278175 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:16:40.278179 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:16:40.278183 | orchestrator | 2026-02-16 03:16:40.278186 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-16 03:16:40.278190 | orchestrator | Monday 16 February 2026 03:16:33 +0000 (0:00:01.267) 0:02:12.635 ******* 2026-02-16 03:16:40.278194 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:16:40.278198 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:16:40.278201 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:16:40.278205 | orchestrator | 2026-02-16 03:16:40.278209 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-16 03:16:40.278212 | orchestrator | Monday 16 February 2026 03:16:35 +0000 (0:00:01.931) 0:02:14.566 ******* 2026-02-16 03:16:40.278216 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:16:40.278220 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:16:40.278224 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:16:40.278227 | orchestrator | 2026-02-16 03:16:40.278231 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-16 03:16:40.278255 | orchestrator | Monday 16 February 2026 03:16:36 +0000 (0:00:00.290) 0:02:14.856 ******* 2026-02-16 03:16:40.278264 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:16:40.278268 | orchestrator | 2026-02-16 03:16:40.278272 | orchestrator | TASK [haproxy-config : Copying over magnum 
haproxy config] ********************* 2026-02-16 03:16:40.278276 | orchestrator | Monday 16 February 2026 03:16:37 +0000 (0:00:01.148) 0:02:16.005 ******* 2026-02-16 03:16:40.278281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 03:16:40.278288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 03:16:40.278297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 03:16:40.278301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 03:16:40.278314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 03:16:45.187689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 03:16:45.187799 | orchestrator | 2026-02-16 03:16:45.187816 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-16 03:16:45.187830 | orchestrator | Monday 16 February 2026 03:16:40 +0000 (0:00:03.011) 0:02:19.016 ******* 2026-02-16 03:16:45.187844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-16 03:16:45.187883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 03:16:45.187896 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:16:45.187909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-16 03:16:45.187954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 03:16:45.187967 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:16:45.187979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-16 03:16:45.188000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 03:16:45.188011 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:16:45.188023 | orchestrator | 2026-02-16 03:16:45.188034 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-16 03:16:45.188045 | orchestrator | Monday 16 February 2026 03:16:40 +0000 (0:00:00.605) 0:02:19.622 ******* 2026-02-16 03:16:45.188057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-16 03:16:45.188070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-16 03:16:45.188113 | orchestrator | skipping: 
[testbed-node-0] 2026-02-16 03:16:45.188124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-16 03:16:45.188135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-16 03:16:45.188146 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:16:45.188157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-16 03:16:45.188168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-16 03:16:45.188179 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:16:45.188190 | orchestrator | 2026-02-16 03:16:45.188201 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-16 03:16:45.188212 | orchestrator | Monday 16 February 2026 03:16:41 +0000 (0:00:00.852) 0:02:20.475 ******* 2026-02-16 03:16:45.188223 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:16:45.188233 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:16:45.188244 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:16:45.188255 | orchestrator | 2026-02-16 03:16:45.188271 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-16 03:16:45.188282 | orchestrator | Monday 16 February 2026 03:16:43 +0000 (0:00:01.531) 0:02:22.006 ******* 2026-02-16 03:16:45.188293 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:16:45.188304 | orchestrator | changed: 
[testbed-node-1] 2026-02-16 03:16:45.188315 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:16:45.188326 | orchestrator | 2026-02-16 03:16:45.188337 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-16 03:16:45.188354 | orchestrator | Monday 16 February 2026 03:16:45 +0000 (0:00:01.920) 0:02:23.926 ******* 2026-02-16 03:16:49.554535 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:16:49.554618 | orchestrator | 2026-02-16 03:16:49.554630 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-16 03:16:49.554638 | orchestrator | Monday 16 February 2026 03:16:46 +0000 (0:00:00.997) 0:02:24.924 ******* 2026-02-16 03:16:49.554647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-16 03:16:49.554659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 03:16:49.554668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 03:16:49.554676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 03:16:49.554683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-16 03:16:49.554754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 03:16:49.554765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 03:16:49.554773 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 03:16:49.554780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-16 03:16:49.554788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 03:16:49.554798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 03:16:49.554819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 03:16:50.468141 | orchestrator | 2026-02-16 03:16:50.468239 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-16 03:16:50.468256 | orchestrator | Monday 16 February 2026 03:16:49 +0000 (0:00:03.456) 0:02:28.381 ******* 2026-02-16 03:16:50.468272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-16 03:16:50.468289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 03:16:50.468304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 03:16:50.468317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 03:16:50.468329 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:16:50.468360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-16 03:16:50.468416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 03:16:50.468430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 03:16:50.468443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 03:16:50.468456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-16 03:16:50.468468 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:16:50.468481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 03:16:50.468506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 
5672'], 'timeout': '30'}}})  2026-02-16 03:16:50.468526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 03:17:01.669906 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:17:01.670004 | orchestrator | 2026-02-16 03:17:01.670072 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-16 03:17:01.670086 | orchestrator | Monday 16 February 2026 03:16:50 +0000 (0:00:00.921) 0:02:29.302 ******* 2026-02-16 03:17:01.670114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-16 03:17:01.670125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-16 03:17:01.670134 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:17:01.670142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-16 03:17:01.670149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-16 03:17:01.670157 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:17:01.670165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-16 03:17:01.670172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-16 03:17:01.670180 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:17:01.670187 | orchestrator | 2026-02-16 03:17:01.670195 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-16 03:17:01.670202 | orchestrator | Monday 16 February 2026 03:16:51 +0000 (0:00:00.860) 0:02:30.162 ******* 2026-02-16 03:17:01.670209 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:17:01.670217 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:17:01.670224 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:17:01.670231 | orchestrator | 2026-02-16 03:17:01.670238 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-16 03:17:01.670261 | orchestrator | Monday 16 February 2026 03:16:52 +0000 (0:00:01.310) 0:02:31.472 ******* 2026-02-16 03:17:01.670269 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:17:01.670276 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:17:01.670283 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:17:01.670291 | orchestrator | 2026-02-16 03:17:01.670298 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-16 03:17:01.670305 | orchestrator | Monday 16 February 2026 03:16:54 +0000 (0:00:01.943) 0:02:33.415 ******* 2026-02-16 03:17:01.670313 | orchestrator | 
included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:17:01.670320 | orchestrator | 2026-02-16 03:17:01.670327 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-16 03:17:01.670334 | orchestrator | Monday 16 February 2026 03:16:55 +0000 (0:00:01.255) 0:02:34.671 ******* 2026-02-16 03:17:01.670342 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 03:17:01.670349 | orchestrator | 2026-02-16 03:17:01.670356 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-16 03:17:01.670364 | orchestrator | Monday 16 February 2026 03:16:59 +0000 (0:00:03.250) 0:02:37.922 ******* 2026-02-16 03:17:01.670397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 03:17:01.670409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-16 03:17:01.670418 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:17:01.670426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 03:17:01.670443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-16 03:17:01.670451 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:17:01.670466 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 03:17:03.922703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-16 03:17:03.922813 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:17:03.922833 | orchestrator | 2026-02-16 03:17:03.922845 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-16 03:17:03.922856 | orchestrator | Monday 16 February 2026 03:17:01 +0000 (0:00:02.483) 0:02:40.406 ******* 2026-02-16 03:17:03.922888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 03:17:03.922903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-16 03:17:03.922913 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:17:03.922945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 03:17:03.922983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-02-16 03:17:03.922994 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:17:03.923011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 03:17:03.923031 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-16 03:17:13.479267 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:17:13.479379 | orchestrator | 2026-02-16 03:17:13.479397 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-16 03:17:13.479410 | orchestrator | Monday 16 February 2026 03:17:03 +0000 (0:00:02.258) 0:02:42.664 ******* 2026-02-16 03:17:13.479423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-16 03:17:13.479439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-16 03:17:13.479451 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:17:13.479479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-16 03:17:13.479491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-16 03:17:13.479503 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:17:13.479514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-16 03:17:13.479548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-16 03:17:13.479560 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:17:13.479571 | orchestrator | 2026-02-16 03:17:13.479582 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-16 03:17:13.479594 | orchestrator | Monday 16 February 2026 03:17:06 +0000 (0:00:02.689) 0:02:45.353 ******* 2026-02-16 03:17:13.479605 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:17:13.479633 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:17:13.479644 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:17:13.479655 | orchestrator | 2026-02-16 03:17:13.479666 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-16 03:17:13.479677 | orchestrator | Monday 16 February 2026 03:17:08 +0000 (0:00:02.070) 0:02:47.423 ******* 2026-02-16 03:17:13.479688 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:17:13.479699 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:17:13.479709 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:17:13.479720 | orchestrator | 2026-02-16 03:17:13.479731 | 
orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-16 03:17:13.479741 | orchestrator | Monday 16 February 2026 03:17:10 +0000 (0:00:01.480) 0:02:48.904 ******* 2026-02-16 03:17:13.479752 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:17:13.479763 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:17:13.479773 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:17:13.479784 | orchestrator | 2026-02-16 03:17:13.479795 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-16 03:17:13.479805 | orchestrator | Monday 16 February 2026 03:17:10 +0000 (0:00:00.321) 0:02:49.226 ******* 2026-02-16 03:17:13.479816 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:17:13.479827 | orchestrator | 2026-02-16 03:17:13.479838 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-16 03:17:13.479848 | orchestrator | Monday 16 February 2026 03:17:11 +0000 (0:00:01.345) 0:02:50.571 ******* 2026-02-16 03:17:13.479866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-16 03:17:13.479881 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-16 03:17:13.479904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-16 03:17:13.479916 | orchestrator | 2026-02-16 03:17:13.479927 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-16 03:17:13.479938 | orchestrator | Monday 16 February 2026 03:17:13 +0000 (0:00:01.448) 0:02:52.020 ******* 2026-02-16 03:17:13.479957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-16 03:17:21.538311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-16 03:17:21.538431 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:17:21.538455 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:17:21.538494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-16 03:17:21.538513 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:17:21.538529 | orchestrator | 2026-02-16 03:17:21.538546 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-16 03:17:21.538564 | orchestrator | Monday 16 February 2026 03:17:13 +0000 (0:00:00.392) 0:02:52.412 ******* 2026-02-16 03:17:21.538612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-16 03:17:21.538634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-16 03:17:21.538651 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:17:21.538669 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:17:21.538683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-16 03:17:21.538693 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:17:21.538702 | orchestrator | 2026-02-16 03:17:21.538712 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-16 03:17:21.538722 | orchestrator | Monday 16 February 2026 03:17:14 +0000 (0:00:00.935) 0:02:53.348 ******* 2026-02-16 03:17:21.538732 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:17:21.538742 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:17:21.538751 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:17:21.538761 | orchestrator | 2026-02-16 03:17:21.538770 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-16 03:17:21.538780 | orchestrator | Monday 16 February 2026 03:17:15 +0000 (0:00:00.452) 0:02:53.800 ******* 2026-02-16 03:17:21.538790 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:17:21.538799 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:17:21.538809 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:17:21.538818 | orchestrator | 2026-02-16 03:17:21.538828 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-16 03:17:21.538838 | orchestrator | Monday 16 February 2026 03:17:16 +0000 (0:00:01.209) 0:02:55.009 ******* 2026-02-16 03:17:21.538847 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:17:21.538856 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:17:21.538866 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:17:21.538875 | orchestrator | 2026-02-16 03:17:21.538888 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-16 03:17:21.538904 | orchestrator | Monday 16 February 2026 03:17:16 +0000 (0:00:00.311) 0:02:55.321 ******* 2026-02-16 03:17:21.538919 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:17:21.538935 | orchestrator | 2026-02-16 03:17:21.538950 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2026-02-16 03:17:21.538967 | orchestrator | Monday 16 February 2026 03:17:17 +0000 (0:00:01.389) 0:02:56.710 ******* 2026-02-16 03:17:21.539007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:17:21.539036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.539058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.539070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.539081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-16 03:17:21.539101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.697565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:21.697688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:21.697717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.697731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:17:21.697744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.697756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-16 03:17:21.697784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:21.697804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.697822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-16 03:17:21.697839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:17:21.697851 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:17:21.697864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.697891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:17:21.799545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.799734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.799772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.799785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.799820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-16 03:17:21.799860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.799874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.799887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-16 03:17:21.799900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:21.799913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.799932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:21.799944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:21.799969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.897328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:21.897426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:17:21.897443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.897455 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.897498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:17:21.897530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-16 03:17:21.897575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.897598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:21.897618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-16 03:17:21.897637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:21.897670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:21.897699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-16 03:17:21.897738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:22.986430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:17:22.986532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-16 03:17:22.986576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:17:22.986590 | orchestrator | 2026-02-16 03:17:22.986604 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-16 03:17:22.986617 | orchestrator | Monday 16 February 2026 03:17:21 +0000 (0:00:04.015) 0:03:00.726 ******* 2026-02-16 03:17:22.986644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:17:22.986676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:22.986689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:22.986702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:22.986721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-16 03:17:22.986733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:22.986751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:22.986764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:22.986784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:23.072304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:17:23.072454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:17:23.072474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:23.072502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:23.072514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:23.072548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-16 03:17:23.072570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:23.072582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:23.072594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-16 03:17:23.072613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:23.072625 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:23.072645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-16 03:17:23.157285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:17:23.157394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:23.157413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:17:23.157445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:23.157458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:23.157492 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:17:23.157552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:23.157567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:23.157579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:17:23.157603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:23.157615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:23.157627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-16 03:17:23.157654 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-16 03:17:23.374580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:23.374685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:23.374703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:23.374734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:23.374748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:23.374784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-16 03:17:23.374821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:23.374835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 
'timeout': '30'}}})  2026-02-16 03:17:23.374848 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:17:23.374863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:17:23.374882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:23.374894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-16 03:17:23.374915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-16 03:17:23.374937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-16 03:17:33.791811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-16 03:17:33.791926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:17:33.791945 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:17:33.791959 | orchestrator | 2026-02-16 03:17:33.791987 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-16 03:17:33.792001 | orchestrator | Monday 16 February 2026 03:17:23 +0000 (0:00:01.391) 0:03:02.117 ******* 2026-02-16 03:17:33.792014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-16 03:17:33.792046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}})  2026-02-16 03:17:33.792091 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:17:33.792104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-16 03:17:33.792115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-16 03:17:33.792126 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:17:33.792136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-16 03:17:33.792176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-16 03:17:33.792187 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:17:33.792198 | orchestrator | 2026-02-16 03:17:33.792209 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-16 03:17:33.792221 | orchestrator | Monday 16 February 2026 03:17:25 +0000 (0:00:01.894) 0:03:04.011 ******* 2026-02-16 03:17:33.792232 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:17:33.792242 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:17:33.792253 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:17:33.792264 | orchestrator | 2026-02-16 03:17:33.792275 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-16 03:17:33.792287 | orchestrator | Monday 16 February 2026 03:17:26 +0000 (0:00:01.268) 0:03:05.279 ******* 2026-02-16 03:17:33.792298 | orchestrator | changed: 
[testbed-node-0] 2026-02-16 03:17:33.792309 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:17:33.792320 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:17:33.792330 | orchestrator | 2026-02-16 03:17:33.792342 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-16 03:17:33.792355 | orchestrator | Monday 16 February 2026 03:17:28 +0000 (0:00:01.974) 0:03:07.254 ******* 2026-02-16 03:17:33.792367 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:17:33.792379 | orchestrator | 2026-02-16 03:17:33.792392 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-16 03:17:33.792424 | orchestrator | Monday 16 February 2026 03:17:29 +0000 (0:00:01.155) 0:03:08.410 ******* 2026-02-16 03:17:33.792440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-16 03:17:33.792460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-16 03:17:33.792486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-16 03:17:33.792499 | orchestrator | 2026-02-16 03:17:33.792512 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-16 03:17:33.792525 | orchestrator | Monday 16 February 2026 03:17:33 +0000 (0:00:03.596) 
0:03:12.006 ******* 2026-02-16 03:17:33.792538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-16 03:17:33.792551 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:17:33.792573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  
2026-02-16 03:17:43.358492 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:17:43.358611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-16 03:17:43.358654 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:17:43.358667 | orchestrator |
2026-02-16 03:17:43.358694 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-02-16 03:17:43.358707 | orchestrator | Monday 16 February 2026 03:17:33 +0000 (0:00:00.526) 0:03:12.533 *******
2026-02-16 03:17:43.358719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-16 03:17:43.358761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-16 03:17:43.358775 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:17:43.358786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-16 03:17:43.358797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-16 03:17:43.358808 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:17:43.358819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-16 03:17:43.358830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-16 03:17:43.358841 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:17:43.358851 | orchestrator |
2026-02-16 03:17:43.358862 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-02-16 03:17:43.358873 | orchestrator | Monday 16 February 2026 03:17:34 +0000 (0:00:00.759) 0:03:13.293 *******
2026-02-16 03:17:43.358884 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:17:43.358895 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:17:43.358905 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:17:43.358916 | orchestrator |
2026-02-16 03:17:43.358927 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-02-16 03:17:43.358938 | orchestrator | Monday 16 February 2026 03:17:36 +0000 (0:00:01.758) 0:03:15.051 *******
2026-02-16 03:17:43.358948 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:17:43.358959 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:17:43.358970 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:17:43.358981 | orchestrator |
2026-02-16 03:17:43.358991 | orchestrator | TASK [include_role : nova] *****************************************************
2026-02-16 03:17:43.359002 | orchestrator | Monday 16 February 2026 03:17:38 +0000 (0:00:01.462) 0:03:16.822 *******
2026-02-16 03:17:43.359013 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:17:43.359024 | orchestrator |
2026-02-16 03:17:43.359036 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-02-16 03:17:43.359049 | orchestrator | Monday 16 February 2026 03:17:39 +0000 (0:00:01.462) 0:03:18.285 *******
2026-02-16 03:17:43.359093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-16 03:17:43.359112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-16 03:17:43.359272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-16 03:17:43.359300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 03:17:43.359335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 03:17:44.532136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 03:17:44.532294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-16 03:17:44.532313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-16 03:17:44.532326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-16 03:17:44.532338 | orchestrator |
2026-02-16 03:17:44.532352 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-02-16 03:17:44.532364 | orchestrator | Monday 16 February 2026 03:17:43 +0000 (0:00:03.816) 0:03:22.101 *******
2026-02-16 03:17:44.532380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-16 03:17:44.532440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 03:17:44.532459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-16 03:17:44.532471 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:17:44.532485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-16 03:17:44.532497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 03:17:44.532516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-16 03:17:44.532528 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:17:44.532569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-16 03:17:55.962834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 03:17:55.962976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-16 03:17:55.963001 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:17:55.963016 | orchestrator |
2026-02-16 03:17:55.963028 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-02-16 03:17:55.963041 | orchestrator | Monday 16 February 2026 03:17:44 +0000 (0:00:01.168) 0:03:23.269 *******
2026-02-16 03:17:55.963053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-16 03:17:55.963068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-16 03:17:55.963109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-16 03:17:55.963124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-16 03:17:55.963137 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:17:55.963148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-16 03:17:55.963159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-16 03:17:55.963197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-16 03:17:55.963210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-16 03:17:55.963221 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:17:55.963233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-16 03:17:55.963244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-16 03:17:55.963255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-16 03:17:55.963302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-16 03:17:55.963315 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:17:55.963327 | orchestrator |
2026-02-16 03:17:55.963340 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-02-16 03:17:55.963353 | orchestrator | Monday 16 February 2026 03:17:45 +0000 (0:00:00.866) 0:03:24.136 *******
2026-02-16 03:17:55.963366 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:17:55.963378 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:17:55.963391 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:17:55.963403 | orchestrator |
2026-02-16 03:17:55.963416 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-02-16 03:17:55.963428 | orchestrator | Monday 16 February 2026 03:17:46 +0000 (0:00:01.346) 0:03:25.482 *******
2026-02-16 03:17:55.963441 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:17:55.963453 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:17:55.963465 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:17:55.963477 | orchestrator |
2026-02-16 03:17:55.963490 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-02-16 03:17:55.963502 | orchestrator | Monday 16 February 2026 03:17:48 +0000 (0:00:02.002) 0:03:27.485 *******
2026-02-16 03:17:55.963515 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:17:55.963536 | orchestrator |
2026-02-16 03:17:55.963548 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-02-16 03:17:55.963561 | orchestrator | Monday 16 February 2026 03:17:50 +0000 (0:00:01.473) 0:03:28.958 *******
2026-02-16 03:17:55.963573 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-02-16 03:17:55.963588 | orchestrator |
2026-02-16 03:17:55.963600 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-02-16 03:17:55.963612 | orchestrator | Monday 16 February 2026 03:17:51 +0000 (0:00:00.807) 0:03:29.766 *******
2026-02-16 03:17:55.963627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-16 03:17:55.963643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-16 03:17:55.963657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-16 03:17:55.963670 | orchestrator |
2026-02-16 03:17:55.963683 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-02-16 03:17:55.963696 | orchestrator | Monday 16 February 2026 03:17:54 +0000 (0:00:03.662) 0:03:33.428 *******
2026-02-16 03:17:55.963707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-16 03:17:55.963719 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:17:55.963743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-16 03:18:13.431601 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:18:13.431707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-16 03:18:13.431754 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:18:13.431767 | orchestrator |
2026-02-16 03:18:13.431780 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-02-16 03:18:13.431792 | orchestrator | Monday 16 February 2026 03:17:55 +0000 (0:00:01.276) 0:03:34.705 *******
2026-02-16 03:18:13.431804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-16 03:18:13.431819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-16 03:18:13.431832 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:18:13.431843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-16 03:18:13.431855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-16 03:18:13.431866 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:18:13.431877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-16 03:18:13.431888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-16 03:18:13.431899 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:18:13.431910 | orchestrator |
2026-02-16 03:18:13.431921 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-16 03:18:13.431932 | orchestrator | Monday 16 February 2026 03:17:57 +0000 (0:00:01.449) 0:03:36.154 *******
2026-02-16 03:18:13.431943 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:18:13.431953 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:18:13.431964 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:18:13.431975 | orchestrator |
2026-02-16 03:18:13.431986 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-16 03:18:13.431996 | orchestrator | Monday 16 February 2026 03:17:59 +0000 (0:00:02.405) 0:03:38.560 *******
2026-02-16 03:18:13.432007 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:18:13.432018 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:18:13.432029 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:18:13.432039 | orchestrator |
2026-02-16 03:18:13.432050 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-02-16 03:18:13.432061 | orchestrator | Monday 16 February 2026 03:18:02 +0000 (0:00:02.825) 0:03:41.385 *******
2026-02-16 03:18:13.432073 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-02-16 03:18:13.432085 | orchestrator | 2026-02-16 03:18:13.432096 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-16 03:18:13.432107 | orchestrator | Monday 16 February 2026 03:18:03 +0000 (0:00:01.012) 0:03:42.398 ******* 2026-02-16 03:18:13.432130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-16 03:18:13.432144 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:18:13.432190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-16 03:18:13.432235 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:18:13.432249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-16 03:18:13.432261 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:18:13.432274 | orchestrator | 2026-02-16 03:18:13.432287 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-16 03:18:13.432299 | orchestrator | Monday 16 February 2026 03:18:04 +0000 (0:00:01.029) 0:03:43.428 ******* 2026-02-16 03:18:13.432312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-16 03:18:13.432324 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:18:13.432338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-16 03:18:13.432351 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:18:13.432363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': 
{'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-16 03:18:13.432375 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:18:13.432386 | orchestrator | 2026-02-16 03:18:13.432397 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-16 03:18:13.432415 | orchestrator | Monday 16 February 2026 03:18:05 +0000 (0:00:01.215) 0:03:44.643 ******* 2026-02-16 03:18:13.432426 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:18:13.432437 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:18:13.432448 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:18:13.432459 | orchestrator | 2026-02-16 03:18:13.432469 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-16 03:18:13.432480 | orchestrator | Monday 16 February 2026 03:18:07 +0000 (0:00:01.402) 0:03:46.046 ******* 2026-02-16 03:18:13.432491 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:18:13.432503 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:18:13.432514 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:18:13.432524 | orchestrator | 2026-02-16 03:18:13.432535 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-16 03:18:13.432546 | orchestrator | Monday 16 February 2026 03:18:09 +0000 (0:00:02.524) 0:03:48.571 ******* 2026-02-16 03:18:13.432556 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:18:13.432567 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:18:13.432578 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:18:13.432588 | orchestrator | 2026-02-16 03:18:13.432599 | 
orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-16 03:18:13.432610 | orchestrator | Monday 16 February 2026 03:18:12 +0000 (0:00:02.497) 0:03:51.068 ******* 2026-02-16 03:18:13.432621 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-16 03:18:13.432632 | orchestrator | 2026-02-16 03:18:13.432655 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-16 03:18:27.403078 | orchestrator | Monday 16 February 2026 03:18:13 +0000 (0:00:01.101) 0:03:52.169 ******* 2026-02-16 03:18:27.403211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-16 03:18:27.403310 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:18:27.403332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-16 03:18:27.403345 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:18:27.403358 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-16 03:18:27.403370 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:18:27.403381 | orchestrator | 2026-02-16 03:18:27.403394 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-16 03:18:27.403407 | orchestrator | Monday 16 February 2026 03:18:14 +0000 (0:00:01.227) 0:03:53.397 ******* 2026-02-16 03:18:27.403443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-16 03:18:27.403455 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:18:27.403466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-16 03:18:27.403477 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:18:27.403488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-16 03:18:27.403500 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:18:27.403510 | orchestrator | 2026-02-16 03:18:27.403522 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-16 03:18:27.403533 | orchestrator | Monday 16 February 2026 03:18:15 +0000 (0:00:01.242) 0:03:54.640 ******* 2026-02-16 03:18:27.403545 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:18:27.403556 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:18:27.403567 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:18:27.403577 | orchestrator | 2026-02-16 03:18:27.403603 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-16 03:18:27.403636 | orchestrator | Monday 16 February 2026 03:18:17 +0000 (0:00:01.664) 0:03:56.305 ******* 2026-02-16 03:18:27.403650 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:18:27.403663 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:18:27.403675 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:18:27.403687 | orchestrator | 2026-02-16 03:18:27.403699 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 
2026-02-16 03:18:27.403718 | orchestrator | Monday 16 February 2026 03:18:19 +0000 (0:00:02.248) 0:03:58.554 ******* 2026-02-16 03:18:27.403738 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:18:27.403756 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:18:27.403773 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:18:27.403791 | orchestrator | 2026-02-16 03:18:27.403810 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-16 03:18:27.403826 | orchestrator | Monday 16 February 2026 03:18:22 +0000 (0:00:03.028) 0:04:01.582 ******* 2026-02-16 03:18:27.403844 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:18:27.403864 | orchestrator | 2026-02-16 03:18:27.403882 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-16 03:18:27.403898 | orchestrator | Monday 16 February 2026 03:18:24 +0000 (0:00:01.508) 0:04:03.091 ******* 2026-02-16 03:18:27.403918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 
03:18:27.403957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 03:18:27.403978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 03:18:27.403999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 03:18:27.404043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 03:18:28.089607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 03:18:28.089778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 03:18:28.089801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 03:18:28.089813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 03:18:28.089825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 03:18:28.089854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 03:18:28.089888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 03:18:28.089910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 03:18:28.089921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 03:18:28.089933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 03:18:28.089945 | orchestrator | 2026-02-16 03:18:28.089959 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-16 03:18:28.089971 | orchestrator | Monday 16 February 2026 03:18:27 +0000 (0:00:03.175) 0:04:06.266 ******* 2026-02-16 03:18:28.089984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-16 03:18:28.090002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 03:18:28.090087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 03:18:29.077187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 03:18:29.077319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 03:18:29.077336 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:18:29.077352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-16 03:18:29.077365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 03:18:29.077393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 03:18:29.077406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 03:18:29.077454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 03:18:29.077467 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:18:29.077479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-16 03:18:29.077491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 03:18:29.077503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 03:18:29.077520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 03:18:29.077532 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 03:18:29.077552 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:18:29.077564 | orchestrator | 2026-02-16 03:18:29.077576 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-16 03:18:29.077589 | orchestrator | Monday 16 February 2026 03:18:28 +0000 (0:00:00.702) 0:04:06.969 ******* 2026-02-16 03:18:29.077608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-16 03:18:40.123202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-16 03:18:40.123369 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:18:40.123387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-16 03:18:40.123400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}})  2026-02-16 03:18:40.123409 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:18:40.123416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-16 03:18:40.123423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-16 03:18:40.123430 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:18:40.123436 | orchestrator | 2026-02-16 03:18:40.123444 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-16 03:18:40.123453 | orchestrator | Monday 16 February 2026 03:18:29 +0000 (0:00:00.845) 0:04:07.815 ******* 2026-02-16 03:18:40.123461 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:18:40.123467 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:18:40.123473 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:18:40.123479 | orchestrator | 2026-02-16 03:18:40.123485 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-16 03:18:40.123492 | orchestrator | Monday 16 February 2026 03:18:30 +0000 (0:00:01.695) 0:04:09.511 ******* 2026-02-16 03:18:40.123498 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:18:40.123504 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:18:40.123511 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:18:40.123517 | orchestrator | 2026-02-16 03:18:40.123523 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-16 03:18:40.123530 | orchestrator | Monday 16 February 2026 03:18:32 +0000 (0:00:02.091) 0:04:11.602 ******* 2026-02-16 03:18:40.123536 | orchestrator | 
included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:18:40.123543 | orchestrator | 2026-02-16 03:18:40.123549 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-16 03:18:40.123555 | orchestrator | Monday 16 February 2026 03:18:34 +0000 (0:00:01.293) 0:04:12.896 ******* 2026-02-16 03:18:40.123600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-16 03:18:40.123609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-16 03:18:40.123633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-16 03:18:40.123641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-16 03:18:40.123649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-16 03:18:40.123666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-16 03:18:40.123672 | orchestrator | 2026-02-16 03:18:40.123678 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-16 03:18:40.123685 | orchestrator | Monday 16 February 2026 03:18:39 +0000 (0:00:05.023) 0:04:17.919 ******* 2026-02-16 03:18:40.123697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-16 03:18:44.416601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-16 03:18:44.416752 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:18:44.416783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-16 03:18:44.416813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-16 03:18:44.416827 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:18:44.416839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-16 03:18:44.416872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-16 03:18:44.416893 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:18:44.416904 | orchestrator | 2026-02-16 03:18:44.416917 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-16 03:18:44.416929 | orchestrator | Monday 16 February 2026 03:18:40 +0000 (0:00:00.946) 0:04:18.866 ******* 2026-02-16 03:18:44.416941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-16 03:18:44.416954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-16 03:18:44.416968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-16 03:18:44.416980 | 
orchestrator | skipping: [testbed-node-0] 2026-02-16 03:18:44.416992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-16 03:18:44.417008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-16 03:18:44.417021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-16 03:18:44.417032 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:18:44.417043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-16 03:18:44.417053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-16 03:18:44.417065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-16 03:18:44.417076 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:18:44.417086 | orchestrator | 2026-02-16 03:18:44.417098 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-16 03:18:44.417109 | orchestrator | Monday 
16 February 2026 03:18:40 +0000 (0:00:00.867) 0:04:19.733 ******* 2026-02-16 03:18:44.417120 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:18:44.417131 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:18:44.417142 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:18:44.417152 | orchestrator | 2026-02-16 03:18:44.417163 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-16 03:18:44.417174 | orchestrator | Monday 16 February 2026 03:18:41 +0000 (0:00:00.397) 0:04:20.131 ******* 2026-02-16 03:18:44.417185 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:18:44.417196 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:18:44.417207 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:18:44.417218 | orchestrator | 2026-02-16 03:18:44.417229 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-16 03:18:44.417271 | orchestrator | Monday 16 February 2026 03:18:42 +0000 (0:00:01.338) 0:04:21.469 ******* 2026-02-16 03:18:44.417292 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:18:46.730699 | orchestrator | 2026-02-16 03:18:46.730811 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-16 03:18:46.730828 | orchestrator | Monday 16 February 2026 03:18:44 +0000 (0:00:01.690) 0:04:23.159 ******* 2026-02-16 03:18:46.730843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-16 03:18:46.730859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-16 03:18:46.730872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:46.730884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:46.730897 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-16 03:18:46.730909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-16 03:18:46.731010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-16 03:18:46.731027 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-16 03:18:46.731039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-16 03:18:46.731051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:46.731069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:46.731081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:46.731094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:46.731124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-16 03:18:48.265671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-16 03:18:48.265773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-16 03:18:48.265810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 
'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-16 03:18:48.265826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:48.265839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-16 03:18:48.265892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:48.265906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-16 03:18:48.265924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-16 03:18:48.265937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-16 03:18:48.265949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:48.265980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-16 03:18:48.926797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:48.926927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:48.926958 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-16 03:18:48.926999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 03:18:48.927020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-16 03:18:48.927041 | orchestrator |
2026-02-16 03:18:48.927061 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-02-16 03:18:48.927081 | orchestrator | Monday 16 February 2026 03:18:48 +0000 (0:00:03.984) 0:04:27.144 *******
2026-02-16 03:18:48.927101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name':
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-16 03:18:48.927151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-16 03:18:48.927199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:48.927221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:48.927241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-16 03:18:48.927306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-16 03:18:48.927334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-16 03:18:48.927372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-16 03:18:48.927410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-16 03:18:49.069922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:49.070094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:49.070134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 
03:18:49.070153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:49.070191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-16 03:18:49.070206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-16 03:18:49.070222 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:18:49.070303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-16 03:18:49.070322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-16 03:18:49.070342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:49.070356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:49.070379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-16 03:18:49.070393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-16 03:18:49.070407 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:18:49.070420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-16 03:18:49.070444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:50.952662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:50.952827 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-16 03:18:50.952892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-16 03:18:50.952918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-16 03:18:50.952941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:50.952960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 03:18:50.953005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-16 03:18:50.953027 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:18:50.953049 | orchestrator |
2026-02-16 03:18:50.953069 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-02-16 03:18:50.953086 | orchestrator | Monday 16 February 2026 03:18:49 +0000 (0:00:00.809) 0:04:27.953 *******
2026-02-16 03:18:50.953107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-16 03:18:50.953150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-16 03:18:50.953173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-16 03:18:50.953195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-16 03:18:50.953217 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:18:50.953235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-16 03:18:50.953286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-16 03:18:50.953306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-16 03:18:50.953327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-16 03:18:50.953345 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:18:50.953365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-16 03:18:50.953383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-16 03:18:50.953402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-16 03:18:50.953421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-16 03:18:50.953440 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:18:50.953458 | orchestrator |
2026-02-16 03:18:50.953478 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-02-16 03:18:50.953497 | orchestrator | Monday 16 February 2026 03:18:50 +0000 (0:00:01.330) 0:04:29.284 *******
2026-02-16 03:18:50.953515 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:18:50.953544 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:18:59.044181 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:18:59.044353 | orchestrator |
2026-02-16 03:18:59.044370 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-02-16 03:18:59.044402 | orchestrator | Monday 16 February 2026 03:18:50 +0000 (0:00:00.409) 0:04:29.694 *******
2026-02-16 03:18:59.044412 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:18:59.044424 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:18:59.044440 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:18:59.044454 | orchestrator |
2026-02-16 03:18:59.044470 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-02-16 03:18:59.044485 | orchestrator | Monday 16 February 2026 03:18:52 +0000 (0:00:01.692) 0:04:30.974 *******
2026-02-16 03:18:59.044499 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:18:59.044512 | orchestrator |
2026-02-16 03:18:59.044526 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-02-16 03:18:59.044540 | orchestrator | Monday 16 February 2026 03:18:53 +0000 (0:00:01.692) 0:04:32.667 *******
2026-02-16 03:18:59.044574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-16 03:18:59.044599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-16 03:18:59.044617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-16 03:18:59.044635 | orchestrator | 2026-02-16 03:18:59.044651 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-16 03:18:59.044680 | orchestrator | Monday 16 February 2026 03:18:56 +0000 (0:00:02.090) 0:04:34.757 ******* 2026-02-16 03:18:59.044720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-16 03:18:59.044738 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:18:59.044761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-16 03:18:59.044780 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:18:59.044797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-16 03:18:59.044814 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:18:59.044829 | orchestrator | 2026-02-16 03:18:59.044844 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-16 03:18:59.044859 | orchestrator | Monday 16 February 2026 03:18:56 +0000 (0:00:00.398) 0:04:35.156 ******* 2026-02-16 03:18:59.044875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-16 03:18:59.044892 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:18:59.044908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-16 03:18:59.044933 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:18:59.044944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-16 03:18:59.044954 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:18:59.044965 | orchestrator | 2026-02-16 03:18:59.044975 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-16 03:18:59.044985 | orchestrator | Monday 16 February 
2026 03:18:57 +0000 (0:00:00.938) 0:04:36.094 ******* 2026-02-16 03:18:59.044995 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:18:59.045005 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:18:59.045015 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:18:59.045025 | orchestrator | 2026-02-16 03:18:59.045036 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-16 03:18:59.045046 | orchestrator | Monday 16 February 2026 03:18:57 +0000 (0:00:00.426) 0:04:36.521 ******* 2026-02-16 03:18:59.045064 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:19:07.350371 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:19:07.350491 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:19:07.350506 | orchestrator | 2026-02-16 03:19:07.350518 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-16 03:19:07.350530 | orchestrator | Monday 16 February 2026 03:18:59 +0000 (0:00:01.266) 0:04:37.788 ******* 2026-02-16 03:19:07.350540 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:19:07.350550 | orchestrator | 2026-02-16 03:19:07.350560 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-16 03:19:07.350570 | orchestrator | Monday 16 February 2026 03:19:00 +0000 (0:00:01.438) 0:04:39.226 ******* 2026-02-16 03:19:07.350606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 03:19:07.350622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 03:19:07.350634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 03:19:07.350689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 03:19:07.350710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': 
'30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 03:19:07.350721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 03:19:07.350731 | orchestrator | 2026-02-16 03:19:07.350742 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-16 03:19:07.350753 | orchestrator | Monday 16 February 2026 03:19:06 +0000 (0:00:06.260) 0:04:45.487 ******* 2026-02-16 03:19:07.350764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-16 03:19:07.350782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-16 03:19:07.350800 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:19:12.778439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-16 03:19:12.778576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-16 03:19:12.778598 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:19:12.778611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-16 03:19:12.778639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-16 03:19:12.778648 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:19:12.778657 | orchestrator | 2026-02-16 03:19:12.778666 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-16 
03:19:12.778687 | orchestrator | Monday 16 February 2026 03:19:07 +0000 (0:00:00.609) 0:04:46.096 ******* 2026-02-16 03:19:12.778712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-16 03:19:12.778724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-16 03:19:12.778734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-16 03:19:12.778742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-16 03:19:12.778750 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:19:12.778763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-16 03:19:12.778771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-16 03:19:12.778784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-16 03:19:12.778798 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-16 03:19:12.778813 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:19:12.778826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-16 03:19:12.778849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-16 03:19:12.778862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-16 03:19:12.778876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-16 03:19:12.778889 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:19:12.778902 | orchestrator | 2026-02-16 03:19:12.778917 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-16 03:19:12.778931 | orchestrator | Monday 16 February 2026 03:19:08 +0000 (0:00:00.856) 0:04:46.953 ******* 2026-02-16 03:19:12.778945 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:19:12.778960 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:19:12.778974 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:19:12.778989 | orchestrator | 2026-02-16 03:19:12.779004 
| orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-16 03:19:12.779017 | orchestrator | Monday 16 February 2026 03:19:09 +0000 (0:00:01.269) 0:04:48.222 ******* 2026-02-16 03:19:12.779031 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:19:12.779044 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:19:12.779057 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:19:12.779070 | orchestrator | 2026-02-16 03:19:12.779082 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-16 03:19:12.779096 | orchestrator | Monday 16 February 2026 03:19:11 +0000 (0:00:02.104) 0:04:50.326 ******* 2026-02-16 03:19:12.779115 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:19:12.779131 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:19:12.779146 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:19:12.779160 | orchestrator | 2026-02-16 03:19:12.779173 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-16 03:19:12.779186 | orchestrator | Monday 16 February 2026 03:19:12 +0000 (0:00:00.590) 0:04:50.917 ******* 2026-02-16 03:19:12.779201 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:19:12.779216 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:19:12.779231 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:19:12.779247 | orchestrator | 2026-02-16 03:19:12.779262 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-16 03:19:12.779303 | orchestrator | Monday 16 February 2026 03:19:12 +0000 (0:00:00.310) 0:04:51.228 ******* 2026-02-16 03:19:12.779313 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:19:12.779321 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:19:12.779340 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:19:55.607891 | orchestrator | 2026-02-16 03:19:55.608071 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-02-16 03:19:55.608091 | orchestrator | Monday 16 February 2026 03:19:12 +0000 (0:00:00.296) 0:04:51.525 ******* 2026-02-16 03:19:55.608104 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:19:55.608118 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:19:55.608130 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:19:55.608141 | orchestrator | 2026-02-16 03:19:55.608165 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-16 03:19:55.608177 | orchestrator | Monday 16 February 2026 03:19:13 +0000 (0:00:00.299) 0:04:51.824 ******* 2026-02-16 03:19:55.608189 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:19:55.608229 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:19:55.608241 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:19:55.608252 | orchestrator | 2026-02-16 03:19:55.608263 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-16 03:19:55.608274 | orchestrator | Monday 16 February 2026 03:19:13 +0000 (0:00:00.597) 0:04:52.422 ******* 2026-02-16 03:19:55.608285 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:19:55.608296 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:19:55.608307 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:19:55.608319 | orchestrator | 2026-02-16 03:19:55.608363 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-16 03:19:55.608406 | orchestrator | Monday 16 February 2026 03:19:14 +0000 (0:00:00.544) 0:04:52.966 ******* 2026-02-16 03:19:55.608425 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:19:55.608445 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:19:55.608462 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:19:55.608479 | orchestrator | 2026-02-16 03:19:55.608498 | orchestrator | 
RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-16 03:19:55.608516 | orchestrator | Monday 16 February 2026 03:19:14 +0000 (0:00:00.640) 0:04:53.607 ******* 2026-02-16 03:19:55.608534 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:19:55.608553 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:19:55.608571 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:19:55.608589 | orchestrator | 2026-02-16 03:19:55.608609 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-16 03:19:55.608621 | orchestrator | Monday 16 February 2026 03:19:15 +0000 (0:00:00.625) 0:04:54.232 ******* 2026-02-16 03:19:55.608631 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:19:55.608642 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:19:55.608653 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:19:55.608663 | orchestrator | 2026-02-16 03:19:55.608674 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-16 03:19:55.608685 | orchestrator | Monday 16 February 2026 03:19:16 +0000 (0:00:00.850) 0:04:55.083 ******* 2026-02-16 03:19:55.608696 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:19:55.608707 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:19:55.608718 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:19:55.608728 | orchestrator | 2026-02-16 03:19:55.608740 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-16 03:19:55.608750 | orchestrator | Monday 16 February 2026 03:19:17 +0000 (0:00:00.870) 0:04:55.953 ******* 2026-02-16 03:19:55.608761 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:19:55.608772 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:19:55.608782 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:19:55.608793 | orchestrator | 2026-02-16 03:19:55.608803 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] 
**************** 2026-02-16 03:19:55.608815 | orchestrator | Monday 16 February 2026 03:19:18 +0000 (0:00:00.861) 0:04:56.814 ******* 2026-02-16 03:19:55.608825 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:19:55.608836 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:19:55.608847 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:19:55.608858 | orchestrator | 2026-02-16 03:19:55.608868 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-16 03:19:55.608879 | orchestrator | Monday 16 February 2026 03:19:22 +0000 (0:00:04.719) 0:05:01.534 ******* 2026-02-16 03:19:55.608890 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:19:55.608900 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:19:55.608911 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:19:55.608922 | orchestrator | 2026-02-16 03:19:55.608933 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-16 03:19:55.608943 | orchestrator | Monday 16 February 2026 03:19:25 +0000 (0:00:03.108) 0:05:04.642 ******* 2026-02-16 03:19:55.608954 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:19:55.608965 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:19:55.608976 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:19:55.608987 | orchestrator | 2026-02-16 03:19:55.609010 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-16 03:19:55.609021 | orchestrator | Monday 16 February 2026 03:19:41 +0000 (0:00:15.650) 0:05:20.292 ******* 2026-02-16 03:19:55.609032 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:19:55.609043 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:19:55.609054 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:19:55.609064 | orchestrator | 2026-02-16 03:19:55.609075 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-16 03:19:55.609087 | 
orchestrator | Monday 16 February 2026 03:19:42 +0000 (0:00:00.705) 0:05:20.998 ******* 2026-02-16 03:19:55.609098 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:19:55.609109 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:19:55.609120 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:19:55.609130 | orchestrator | 2026-02-16 03:19:55.609141 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-16 03:19:55.609152 | orchestrator | Monday 16 February 2026 03:19:46 +0000 (0:00:04.369) 0:05:25.368 ******* 2026-02-16 03:19:55.609163 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:19:55.609173 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:19:55.609184 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:19:55.609195 | orchestrator | 2026-02-16 03:19:55.609205 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-16 03:19:55.609216 | orchestrator | Monday 16 February 2026 03:19:47 +0000 (0:00:00.646) 0:05:26.015 ******* 2026-02-16 03:19:55.609227 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:19:55.609238 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:19:55.609248 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:19:55.609259 | orchestrator | 2026-02-16 03:19:55.609292 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-16 03:19:55.609304 | orchestrator | Monday 16 February 2026 03:19:47 +0000 (0:00:00.335) 0:05:26.350 ******* 2026-02-16 03:19:55.609315 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:19:55.609326 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:19:55.611554 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:19:55.611597 | orchestrator | 2026-02-16 03:19:55.611613 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-16 03:19:55.611638 | 
orchestrator | Monday 16 February 2026 03:19:47 +0000 (0:00:00.332) 0:05:26.683 ******* 2026-02-16 03:19:55.611650 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:19:55.611661 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:19:55.611672 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:19:55.611684 | orchestrator | 2026-02-16 03:19:55.611696 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-16 03:19:55.611707 | orchestrator | Monday 16 February 2026 03:19:48 +0000 (0:00:00.384) 0:05:27.067 ******* 2026-02-16 03:19:55.611718 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:19:55.611729 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:19:55.611740 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:19:55.611751 | orchestrator | 2026-02-16 03:19:55.611762 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-16 03:19:55.611773 | orchestrator | Monday 16 February 2026 03:19:48 +0000 (0:00:00.646) 0:05:27.714 ******* 2026-02-16 03:19:55.611783 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:19:55.611814 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:19:55.611826 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:19:55.611839 | orchestrator | 2026-02-16 03:19:55.611858 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-16 03:19:55.611878 | orchestrator | Monday 16 February 2026 03:19:49 +0000 (0:00:00.358) 0:05:28.072 ******* 2026-02-16 03:19:55.611897 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:19:55.611936 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:19:55.611948 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:19:55.611959 | orchestrator | 2026-02-16 03:19:55.611970 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-16 03:19:55.612005 | orchestrator | Monday 
16 February 2026 03:19:54 +0000 (0:00:04.712) 0:05:32.785 ******* 2026-02-16 03:19:55.612018 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:19:55.612029 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:19:55.612040 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:19:55.612051 | orchestrator | 2026-02-16 03:19:55.612062 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 03:19:55.612074 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-16 03:19:55.612087 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-16 03:19:55.612098 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-16 03:19:55.612109 | orchestrator | 2026-02-16 03:19:55.612120 | orchestrator | 2026-02-16 03:19:55.612131 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 03:19:55.612142 | orchestrator | Monday 16 February 2026 03:19:54 +0000 (0:00:00.779) 0:05:33.564 ******* 2026-02-16 03:19:55.612153 | orchestrator | =============================================================================== 2026-02-16 03:19:55.612164 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.65s 2026-02-16 03:19:55.612174 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.26s 2026-02-16 03:19:55.612185 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.02s 2026-02-16 03:19:55.612196 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.72s 2026-02-16 03:19:55.612207 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.71s 2026-02-16 03:19:55.612218 | orchestrator | loadbalancer : Start backup keepalived 
container ------------------------ 4.37s 2026-02-16 03:19:55.612244 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.02s 2026-02-16 03:19:55.612267 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 3.98s 2026-02-16 03:19:55.612278 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.92s 2026-02-16 03:19:55.612289 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.82s 2026-02-16 03:19:55.612300 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.66s 2026-02-16 03:19:55.612311 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.60s 2026-02-16 03:19:55.612322 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.46s 2026-02-16 03:19:55.612349 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.37s 2026-02-16 03:19:55.612360 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.31s 2026-02-16 03:19:55.612371 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.30s 2026-02-16 03:19:55.612382 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.27s 2026-02-16 03:19:55.612393 | orchestrator | mariadb : Ensure mysql monitor user exist ------------------------------- 3.25s 2026-02-16 03:19:55.612404 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.24s 2026-02-16 03:19:55.612415 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.23s 2026-02-16 03:19:57.852721 | orchestrator | 2026-02-16 03:19:57 | INFO  | Task a7c038f6-efd7-4d80-b3b9-fa40419e98f4 (opensearch) was prepared for execution. 
2026-02-16 03:19:57.852840 | orchestrator | 2026-02-16 03:19:57 | INFO  | It takes a moment until task a7c038f6-efd7-4d80-b3b9-fa40419e98f4 (opensearch) has been started and output is visible here. 2026-02-16 03:20:07.718252 | orchestrator | 2026-02-16 03:20:07.718435 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 03:20:07.718478 | orchestrator | 2026-02-16 03:20:07.718490 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 03:20:07.718500 | orchestrator | Monday 16 February 2026 03:20:01 +0000 (0:00:00.191) 0:00:00.191 ******* 2026-02-16 03:20:07.718510 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:20:07.718521 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:20:07.718531 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:20:07.718540 | orchestrator | 2026-02-16 03:20:07.718550 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 03:20:07.718560 | orchestrator | Monday 16 February 2026 03:20:02 +0000 (0:00:00.233) 0:00:00.425 ******* 2026-02-16 03:20:07.718571 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-16 03:20:07.718581 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-16 03:20:07.718590 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-16 03:20:07.718600 | orchestrator | 2026-02-16 03:20:07.718610 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-16 03:20:07.718619 | orchestrator | 2026-02-16 03:20:07.718642 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-16 03:20:07.718652 | orchestrator | Monday 16 February 2026 03:20:02 +0000 (0:00:00.332) 0:00:00.758 ******* 2026-02-16 03:20:07.718663 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-16 03:20:07.718673 | orchestrator | 2026-02-16 03:20:07.718682 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-16 03:20:07.718692 | orchestrator | Monday 16 February 2026 03:20:02 +0000 (0:00:00.425) 0:00:01.183 ******* 2026-02-16 03:20:07.718702 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-16 03:20:07.718711 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-16 03:20:07.718721 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-16 03:20:07.718730 | orchestrator | 2026-02-16 03:20:07.718740 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-16 03:20:07.718750 | orchestrator | Monday 16 February 2026 03:20:03 +0000 (0:00:00.617) 0:00:01.801 ******* 2026-02-16 03:20:07.718764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-16 03:20:07.718777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-16 03:20:07.718816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-16 03:20:07.718837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-16 03:20:07.718853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-16 03:20:07.718866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-16 03:20:07.718885 | orchestrator | 2026-02-16 03:20:07.718896 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-16 03:20:07.718908 | orchestrator | Monday 16 February 2026 03:20:05 +0000 (0:00:01.450) 0:00:03.251 ******* 2026-02-16 03:20:07.718919 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:20:07.718931 | orchestrator | 2026-02-16 03:20:07.718942 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-16 03:20:07.718954 | orchestrator | Monday 16 February 2026 03:20:05 +0000 (0:00:00.449) 0:00:03.701 ******* 2026-02-16 03:20:07.718974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-16 03:20:08.481008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-16 03:20:08.481103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-16 03:20:08.481118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-16 03:20:08.481196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-16 03:20:08.481236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-16 03:20:08.481254 | orchestrator | 2026-02-16 03:20:08.481272 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-16 03:20:08.481288 | orchestrator | Monday 16 February 2026 03:20:07 +0000 (0:00:02.230) 0:00:05.932 ******* 
2026-02-16 03:20:08.481305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-16 03:20:08.481322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2026-02-16 03:20:08.481440 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:20:08.481454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-16 03:20:08.481480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-16 03:20:09.442869 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:20:09.442977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-16 03:20:09.443006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-16 03:20:09.443058 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:20:09.443079 | orchestrator | 2026-02-16 03:20:09.443101 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-16 03:20:09.443122 | orchestrator | Monday 16 February 2026 03:20:08 +0000 (0:00:00.762) 0:00:06.694 ******* 2026-02-16 03:20:09.443144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-16 03:20:09.443174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-16 03:20:09.443210 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:20:09.443230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-16 03:20:09.443250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-16 03:20:09.443283 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:20:09.443303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-16 03:20:09.443330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-16 03:20:09.443416 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:20:09.443439 | orchestrator | 2026-02-16 03:20:09.443460 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-16 03:20:09.443493 | orchestrator | Monday 16 February 2026 03:20:09 +0000 (0:00:00.954) 0:00:07.649 ******* 2026-02-16 03:20:17.397989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-16 03:20:17.398214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-16 03:20:17.398233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-16 03:20:17.398262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-16 03:20:17.398296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-16 03:20:17.398320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-16 03:20:17.398333 | orchestrator | 2026-02-16 03:20:17.398346 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-16 03:20:17.398386 | orchestrator | Monday 16 February 2026 03:20:11 +0000 (0:00:02.244) 0:00:09.894 ******* 2026-02-16 03:20:17.398398 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:20:17.398411 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:20:17.398422 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:20:17.398433 | orchestrator | 2026-02-16 03:20:17.398444 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-16 03:20:17.398455 | orchestrator | Monday 16 February 2026 03:20:13 +0000 (0:00:02.284) 0:00:12.179 ******* 2026-02-16 03:20:17.398466 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:20:17.398477 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:20:17.398487 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:20:17.398500 | 
orchestrator | 2026-02-16 03:20:17.398513 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-16 03:20:17.398525 | orchestrator | Monday 16 February 2026 03:20:15 +0000 (0:00:01.759) 0:00:13.939 ******* 2026-02-16 03:20:17.398538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-16 03:20:17.398558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-02-16 03:20:17.398595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-16 03:22:57.930933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-02-16 03:22:57.931072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-16 03:22:57.931105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-16 03:22:57.931139 | orchestrator | 2026-02-16 03:22:57.931151 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-16 03:22:57.931162 | orchestrator | Monday 16 February 2026 03:20:17 +0000 (0:00:01.671) 0:00:15.610 ******* 2026-02-16 03:22:57.931172 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:22:57.931183 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:22:57.931193 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:22:57.931202 | orchestrator | 2026-02-16 03:22:57.931213 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-16 03:22:57.931223 | orchestrator | Monday 16 February 2026 03:20:17 +0000 (0:00:00.265) 0:00:15.876 ******* 2026-02-16 03:22:57.931232 | orchestrator | 2026-02-16 03:22:57.931242 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-16 03:22:57.931252 | orchestrator | Monday 16 February 2026 03:20:17 +0000 (0:00:00.059) 0:00:15.936 ******* 2026-02-16 03:22:57.931261 | orchestrator | 2026-02-16 03:22:57.931271 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-16 03:22:57.931281 | orchestrator | Monday 16 February 2026 03:20:17 +0000 (0:00:00.065) 0:00:16.001 ******* 2026-02-16 03:22:57.931290 | orchestrator | 2026-02-16 03:22:57.931300 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-16 03:22:57.931324 | orchestrator | Monday 16 February 2026 03:20:17 +0000 (0:00:00.062) 0:00:16.063 ******* 2026-02-16 03:22:57.931335 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:22:57.931344 | orchestrator | 
2026-02-16 03:22:57.931354 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-16 03:22:57.931364 | orchestrator | Monday 16 February 2026 03:20:18 +0000 (0:00:00.202) 0:00:16.266 ******* 2026-02-16 03:22:57.931373 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:22:57.931383 | orchestrator | 2026-02-16 03:22:57.931392 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-16 03:22:57.931402 | orchestrator | Monday 16 February 2026 03:20:18 +0000 (0:00:00.607) 0:00:16.873 ******* 2026-02-16 03:22:57.931411 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:22:57.931421 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:22:57.931432 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:22:57.931443 | orchestrator | 2026-02-16 03:22:57.931455 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-16 03:22:57.931466 | orchestrator | Monday 16 February 2026 03:21:30 +0000 (0:01:11.364) 0:01:28.238 ******* 2026-02-16 03:22:57.931477 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:22:57.931488 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:22:57.931499 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:22:57.931510 | orchestrator | 2026-02-16 03:22:57.931521 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-16 03:22:57.931532 | orchestrator | Monday 16 February 2026 03:22:47 +0000 (0:01:17.091) 0:02:45.330 ******* 2026-02-16 03:22:57.931600 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:22:57.931615 | orchestrator | 2026-02-16 03:22:57.931626 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-16 03:22:57.931638 | orchestrator | Monday 16 February 2026 03:22:47 +0000 
(0:00:00.472) 0:02:45.802 ******* 2026-02-16 03:22:57.931649 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:22:57.931660 | orchestrator | 2026-02-16 03:22:57.931671 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-16 03:22:57.931682 | orchestrator | Monday 16 February 2026 03:22:50 +0000 (0:00:02.867) 0:02:48.670 ******* 2026-02-16 03:22:57.931693 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:22:57.931705 | orchestrator | 2026-02-16 03:22:57.931716 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-16 03:22:57.931727 | orchestrator | Monday 16 February 2026 03:22:52 +0000 (0:00:02.184) 0:02:50.854 ******* 2026-02-16 03:22:57.931738 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:22:57.931756 | orchestrator | 2026-02-16 03:22:57.931766 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-16 03:22:57.931776 | orchestrator | Monday 16 February 2026 03:22:55 +0000 (0:00:02.740) 0:02:53.594 ******* 2026-02-16 03:22:57.931786 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:22:57.931796 | orchestrator | 2026-02-16 03:22:57.931806 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 03:22:57.931816 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-16 03:22:57.931828 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-16 03:22:57.931838 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-16 03:22:57.931848 | orchestrator | 2026-02-16 03:22:57.931857 | orchestrator | 2026-02-16 03:22:57.931867 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 03:22:57.931877 | orchestrator | Monday 16 
February 2026 03:22:57 +0000 (0:00:02.534) 0:02:56.128 ******* 2026-02-16 03:22:57.931887 | orchestrator | =============================================================================== 2026-02-16 03:22:57.931902 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 77.09s 2026-02-16 03:22:57.931912 | orchestrator | opensearch : Restart opensearch container ------------------------------ 71.36s 2026-02-16 03:22:57.931921 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.87s 2026-02-16 03:22:57.931931 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.74s 2026-02-16 03:22:57.931940 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.53s 2026-02-16 03:22:57.931950 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.28s 2026-02-16 03:22:57.931959 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.24s 2026-02-16 03:22:57.931969 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.23s 2026-02-16 03:22:57.931979 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.18s 2026-02-16 03:22:57.931988 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.76s 2026-02-16 03:22:57.931998 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.67s 2026-02-16 03:22:57.932007 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.45s 2026-02-16 03:22:57.932017 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.95s 2026-02-16 03:22:57.932026 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.76s 2026-02-16 03:22:57.932036 | orchestrator | opensearch : Setting 
sysctl values -------------------------------------- 0.62s 2026-02-16 03:22:57.932046 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.61s 2026-02-16 03:22:57.932062 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s 2026-02-16 03:22:58.272036 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.45s 2026-02-16 03:22:58.272123 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.43s 2026-02-16 03:22:58.272134 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.33s 2026-02-16 03:23:00.664417 | orchestrator | 2026-02-16 03:23:00 | INFO  | Task 8cadd5d2-1628-451c-8ed0-155914297ac6 (memcached) was prepared for execution. 2026-02-16 03:23:00.664520 | orchestrator | 2026-02-16 03:23:00 | INFO  | It takes a moment until task 8cadd5d2-1628-451c-8ed0-155914297ac6 (memcached) has been started and output is visible here. 
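Every console line above follows the same Zuul prefix shape, `<timestamp> | <node> | <message>`. As an aside (not part of the job itself), a minimal parser for that prefix can make logs like this one easier to post-process:

```python
import re

# Zuul console lines look like:
#   2026-02-16 03:23:12.129876 | orchestrator | changed: [testbed-node-0]
# This regex splits them into timestamp, node, and message.
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \| "
    r"(?P<node>\S+) \| (?P<msg>.*)$"
)

def parse_console_line(line: str):
    """Return (timestamp, node, message) for a Zuul console line, or None."""
    m = LINE_RE.match(line)
    if m is None:
        return None
    return m.group("ts"), m.group("node"), m.group("msg")
```

For example, `parse_console_line("2026-02-16 03:23:12.129876 | orchestrator | changed: [testbed-node-0]")` yields the timestamp, the node name `orchestrator`, and the Ansible status message.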
2026-02-16 03:23:12.129876 | orchestrator | 2026-02-16 03:23:12.129993 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 03:23:12.130108 | orchestrator | 2026-02-16 03:23:12.130134 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 03:23:12.130165 | orchestrator | Monday 16 February 2026 03:23:04 +0000 (0:00:00.247) 0:00:00.247 ******* 2026-02-16 03:23:12.130184 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:23:12.130205 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:23:12.130224 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:23:12.130242 | orchestrator | 2026-02-16 03:23:12.130258 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 03:23:12.130276 | orchestrator | Monday 16 February 2026 03:23:05 +0000 (0:00:00.272) 0:00:00.520 ******* 2026-02-16 03:23:12.130297 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-16 03:23:12.130316 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-16 03:23:12.130335 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-16 03:23:12.130354 | orchestrator | 2026-02-16 03:23:12.130372 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-16 03:23:12.130390 | orchestrator | 2026-02-16 03:23:12.130409 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-16 03:23:12.130430 | orchestrator | Monday 16 February 2026 03:23:05 +0000 (0:00:00.394) 0:00:00.915 ******* 2026-02-16 03:23:12.130451 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:23:12.130471 | orchestrator | 2026-02-16 03:23:12.130488 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-02-16 03:23:12.130507 | orchestrator | Monday 16 February 2026 03:23:05 +0000 (0:00:00.489) 0:00:01.404 ******* 2026-02-16 03:23:12.130527 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-16 03:23:12.130547 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-16 03:23:12.130566 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-16 03:23:12.130632 | orchestrator | 2026-02-16 03:23:12.130650 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-16 03:23:12.130669 | orchestrator | Monday 16 February 2026 03:23:06 +0000 (0:00:00.633) 0:00:02.038 ******* 2026-02-16 03:23:12.130687 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-16 03:23:12.130706 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-16 03:23:12.130726 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-16 03:23:12.130744 | orchestrator | 2026-02-16 03:23:12.130762 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-02-16 03:23:12.130781 | orchestrator | Monday 16 February 2026 03:23:08 +0000 (0:00:01.663) 0:00:03.701 ******* 2026-02-16 03:23:12.130798 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:23:12.130815 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:23:12.130830 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:23:12.130847 | orchestrator | 2026-02-16 03:23:12.130865 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-16 03:23:12.130882 | orchestrator | Monday 16 February 2026 03:23:09 +0000 (0:00:01.420) 0:00:05.122 ******* 2026-02-16 03:23:12.131012 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:23:12.131083 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:23:12.131104 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:23:12.131122 | orchestrator | 2026-02-16 
03:23:12.131140 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 03:23:12.131160 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 03:23:12.131180 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 03:23:12.131200 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 03:23:12.131240 | orchestrator | 2026-02-16 03:23:12.131259 | orchestrator | 2026-02-16 03:23:12.131278 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 03:23:12.131296 | orchestrator | Monday 16 February 2026 03:23:11 +0000 (0:00:02.078) 0:00:07.201 ******* 2026-02-16 03:23:12.131313 | orchestrator | =============================================================================== 2026-02-16 03:23:12.131332 | orchestrator | memcached : Restart memcached container --------------------------------- 2.08s 2026-02-16 03:23:12.131351 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.66s 2026-02-16 03:23:12.131368 | orchestrator | memcached : Check memcached container ----------------------------------- 1.42s 2026-02-16 03:23:12.131388 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.63s 2026-02-16 03:23:12.131407 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.49s 2026-02-16 03:23:12.131425 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s 2026-02-16 03:23:12.131444 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s 2026-02-16 03:23:14.354193 | orchestrator | 2026-02-16 03:23:14 | INFO  | Task 6a2497e3-44ee-4f29-b711-51bdcc155503 (redis) was prepared for execution. 
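The `PLAY RECAP` blocks above report per-host counters in a fixed `key=value` format. As an illustrative sketch (again, tooling around the log, not part of the job), one recap line can be turned into a dictionary for automated pass/fail checks:

```python
import re

def parse_recap(line: str) -> dict:
    """Parse one PLAY RECAP host line, e.g.
    'testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 ...'
    into {'host': 'testbed-node-0', 'ok': 7, 'changed': 4, ...}."""
    host, _, rest = line.partition(":")
    counters = {key: int(val) for key, val in re.findall(r"(\w+)=(\d+)", rest)}
    counters["host"] = host.strip()
    return counters
```

A caller could then flag the run as broken whenever `failed` or `unreachable` is non-zero for any host.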
2026-02-16 03:23:14.354300 | orchestrator | 2026-02-16 03:23:14 | INFO  | It takes a moment until task 6a2497e3-44ee-4f29-b711-51bdcc155503 (redis) has been started and output is visible here. 2026-02-16 03:23:22.589400 | orchestrator | 2026-02-16 03:23:22.589498 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 03:23:22.589512 | orchestrator | 2026-02-16 03:23:22.589521 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 03:23:22.589528 | orchestrator | Monday 16 February 2026 03:23:18 +0000 (0:00:00.239) 0:00:00.239 ******* 2026-02-16 03:23:22.589536 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:23:22.589544 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:23:22.589552 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:23:22.589559 | orchestrator | 2026-02-16 03:23:22.589566 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 03:23:22.589574 | orchestrator | Monday 16 February 2026 03:23:18 +0000 (0:00:00.206) 0:00:00.445 ******* 2026-02-16 03:23:22.589581 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-16 03:23:22.589589 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-16 03:23:22.589657 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-16 03:23:22.589665 | orchestrator | 2026-02-16 03:23:22.589672 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-16 03:23:22.589679 | orchestrator | 2026-02-16 03:23:22.589687 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-16 03:23:22.589694 | orchestrator | Monday 16 February 2026 03:23:18 +0000 (0:00:00.293) 0:00:00.739 ******* 2026-02-16 03:23:22.589702 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-16 03:23:22.589710 | orchestrator | 2026-02-16 03:23:22.589718 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-16 03:23:22.589725 | orchestrator | Monday 16 February 2026 03:23:19 +0000 (0:00:00.422) 0:00:01.161 ******* 2026-02-16 03:23:22.589736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 03:23:22.589748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 03:23:22.589790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 03:23:22.589799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 03:23:22.589822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 03:23:22.589831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 03:23:22.589839 | orchestrator | 2026-02-16 03:23:22.589847 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-16 03:23:22.589854 | orchestrator | Monday 16 February 2026 03:23:20 +0000 (0:00:00.992) 0:00:02.153 ******* 2026-02-16 03:23:22.589862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 03:23:22.589876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 03:23:22.589888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 03:23:22.589896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 03:23:22.589909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 03:23:26.517773 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 03:23:26.517993 | orchestrator | 2026-02-16 03:23:26.518064 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-16 03:23:26.518143 | orchestrator | Monday 16 February 2026 03:23:22 +0000 (0:00:02.223) 0:00:04.377 ******* 2026-02-16 03:23:26.518159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 03:23:26.518198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 03:23:26.518225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 03:23:26.518238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 03:23:26.518250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 03:23:26.518282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 03:23:26.518294 | orchestrator | 2026-02-16 03:23:26.518308 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-02-16 03:23:26.518320 | orchestrator | Monday 16 February 2026 03:23:24 +0000 (0:00:02.317) 0:00:06.695 ******* 2026-02-16 03:23:26.518333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 03:23:26.518355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 03:23:26.518373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 03:23:26.518387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 03:23:26.518400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 03:23:26.518421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 03:23:37.701296 | orchestrator | 2026-02-16 03:23:37.701410 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-16 03:23:37.701427 | orchestrator | Monday 16 February 2026 03:23:26 +0000 (0:00:01.413) 0:00:08.109 ******* 2026-02-16 03:23:37.701439 | orchestrator | 2026-02-16 03:23:37.701450 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-16 03:23:37.701485 | orchestrator | Monday 16 February 2026 03:23:26 +0000 (0:00:00.062) 0:00:08.172 ******* 2026-02-16 03:23:37.701497 | orchestrator | 2026-02-16 03:23:37.701508 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2026-02-16 03:23:37.701519 | orchestrator | Monday 16 February 2026 03:23:26 +0000 (0:00:00.065) 0:00:08.237 ******* 2026-02-16 03:23:37.701529 | orchestrator | 2026-02-16 03:23:37.701540 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-16 03:23:37.701551 | orchestrator | Monday 16 February 2026 03:23:26 +0000 (0:00:00.064) 0:00:08.302 ******* 2026-02-16 03:23:37.701562 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:23:37.701574 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:23:37.701585 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:23:37.701596 | orchestrator | 2026-02-16 03:23:37.701607 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-16 03:23:37.701645 | orchestrator | Monday 16 February 2026 03:23:34 +0000 (0:00:07.887) 0:00:16.189 ******* 2026-02-16 03:23:37.701656 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:23:37.701667 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:23:37.701678 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:23:37.701689 | orchestrator | 2026-02-16 03:23:37.701700 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 03:23:37.701711 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 03:23:37.701724 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 03:23:37.701735 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 03:23:37.701746 | orchestrator | 2026-02-16 03:23:37.701757 | orchestrator | 2026-02-16 03:23:37.701768 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 03:23:37.701779 | orchestrator | Monday 16 February 
2026 03:23:37 +0000 (0:00:02.985) 0:00:19.175 ******* 2026-02-16 03:23:37.701801 | orchestrator | =============================================================================== 2026-02-16 03:23:37.701812 | orchestrator | redis : Restart redis container ----------------------------------------- 7.89s 2026-02-16 03:23:37.701823 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 2.99s 2026-02-16 03:23:37.701834 | orchestrator | redis : Copying over redis config files --------------------------------- 2.32s 2026-02-16 03:23:37.701845 | orchestrator | redis : Copying over default config.json files -------------------------- 2.22s 2026-02-16 03:23:37.701858 | orchestrator | redis : Check redis containers ------------------------------------------ 1.41s 2026-02-16 03:23:37.701870 | orchestrator | redis : Ensuring config directories exist ------------------------------- 0.99s 2026-02-16 03:23:37.701882 | orchestrator | redis : include_tasks --------------------------------------------------- 0.42s 2026-02-16 03:23:37.701894 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.29s 2026-02-16 03:23:37.701907 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.21s 2026-02-16 03:23:37.701919 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.19s 2026-02-16 03:23:39.969930 | orchestrator | 2026-02-16 03:23:39 | INFO  | Task 9237bb5a-1f90-4a31-bf15-9c83a1fe0bcf (mariadb) was prepared for execution. 2026-02-16 03:23:39.970011 | orchestrator | 2026-02-16 03:23:39 | INFO  | It takes a moment until task 9237bb5a-1f90-4a31-bf15-9c83a1fe0bcf (mariadb) has been started and output is visible here. 
2026-02-16 03:23:52.077211 | orchestrator | 2026-02-16 03:23:52.077319 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 03:23:52.077335 | orchestrator | 2026-02-16 03:23:52.077347 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 03:23:52.077382 | orchestrator | Monday 16 February 2026 03:23:43 +0000 (0:00:00.122) 0:00:00.122 ******* 2026-02-16 03:23:52.077394 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:23:52.077406 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:23:52.077417 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:23:52.077427 | orchestrator | 2026-02-16 03:23:52.077438 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 03:23:52.077449 | orchestrator | Monday 16 February 2026 03:23:43 +0000 (0:00:00.211) 0:00:00.334 ******* 2026-02-16 03:23:52.077460 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-16 03:23:52.077472 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-16 03:23:52.077483 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-16 03:23:52.077494 | orchestrator | 2026-02-16 03:23:52.077505 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-16 03:23:52.077516 | orchestrator | 2026-02-16 03:23:52.077527 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-16 03:23:52.077538 | orchestrator | Monday 16 February 2026 03:23:44 +0000 (0:00:00.398) 0:00:00.732 ******* 2026-02-16 03:23:52.077549 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 03:23:52.077560 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-16 03:23:52.077571 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-16 03:23:52.077582 | orchestrator | 
2026-02-16 03:23:52.077593 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-16 03:23:52.077603 | orchestrator | Monday 16 February 2026 03:23:44 +0000 (0:00:00.312) 0:00:01.045 ******* 2026-02-16 03:23:52.077615 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:23:52.077627 | orchestrator | 2026-02-16 03:23:52.077680 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-16 03:23:52.077691 | orchestrator | Monday 16 February 2026 03:23:45 +0000 (0:00:00.447) 0:00:01.492 ******* 2026-02-16 03:23:52.077722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 03:23:52.077799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 03:23:52.077833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 03:23:52.077848 | orchestrator | 2026-02-16 03:23:52.077861 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-16 03:23:52.077874 | orchestrator | Monday 16 February 2026 03:23:47 +0000 (0:00:02.157) 0:00:03.650 ******* 2026-02-16 03:23:52.077887 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:23:52.077901 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:23:52.077913 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:23:52.077934 | orchestrator | 2026-02-16 03:23:52.077949 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-16 03:23:52.077962 | orchestrator | Monday 16 February 2026 03:23:47 +0000 (0:00:00.525) 0:00:04.175 ******* 2026-02-16 03:23:52.077973 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:23:52.077984 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:23:52.077994 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:23:52.078005 | orchestrator | 2026-02-16 03:23:52.078073 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-16 03:23:52.078088 | orchestrator | Monday 16 February 2026 03:23:49 +0000 (0:00:01.303) 0:00:05.479 ******* 2026-02-16 03:23:52.078111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 03:23:59.346111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 03:23:59.346256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 03:23:59.346272 | orchestrator | 2026-02-16 03:23:59.346283 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-16 03:23:59.346292 | orchestrator | Monday 16 February 2026 03:23:52 +0000 (0:00:02.921) 0:00:08.400 ******* 2026-02-16 03:23:59.346300 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:23:59.346310 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:23:59.346317 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:23:59.346325 | orchestrator | 2026-02-16 03:23:59.346334 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-16 03:23:59.346357 | orchestrator | Monday 16 February 2026 03:23:53 +0000 (0:00:01.059) 0:00:09.460 ******* 2026-02-16 03:23:59.346365 | 
orchestrator | changed: [testbed-node-0] 2026-02-16 03:23:59.346373 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:23:59.346380 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:23:59.346387 | orchestrator | 2026-02-16 03:23:59.346394 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-16 03:23:59.346401 | orchestrator | Monday 16 February 2026 03:23:56 +0000 (0:00:03.522) 0:00:12.983 ******* 2026-02-16 03:23:59.346409 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:23:59.346417 | orchestrator | 2026-02-16 03:23:59.346423 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-16 03:23:59.346430 | orchestrator | Monday 16 February 2026 03:23:57 +0000 (0:00:00.483) 0:00:13.466 ******* 2026-02-16 03:23:59.346443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 03:23:59.346464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 03:24:03.941032 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:24:03.941138 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:24:03.941174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 03:24:03.941215 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:24:03.941227 | orchestrator | 2026-02-16 03:24:03.941240 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-16 03:24:03.941252 | orchestrator | Monday 16 February 2026 03:23:59 +0000 (0:00:02.204) 0:00:15.670 ******* 2026-02-16 03:24:03.941265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 03:24:03.941278 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:24:03.941315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 03:24:03.941338 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:24:03.941351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 03:24:03.941369 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:24:03.941388 | orchestrator | 2026-02-16 03:24:03.941407 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-16 03:24:03.941426 | orchestrator | Monday 16 February 2026 03:24:01 +0000 (0:00:02.316) 0:00:17.987 ******* 2026-02-16 03:24:03.941467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 03:24:06.478642 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:24:06.478806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 03:24:06.478827 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:24:06.478852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 03:24:06.478887 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:24:06.478900 | orchestrator | 2026-02-16 03:24:06.478912 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-16 03:24:06.478940 | orchestrator | Monday 16 February 2026 03:24:03 +0000 (0:00:02.281) 0:00:20.269 ******* 2026-02-16 03:24:06.478972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 03:24:06.478988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 03:24:06.479019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 03:26:19.208465 | orchestrator | 2026-02-16 03:26:19.208619 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-16 03:26:19.208639 | orchestrator | Monday 16 February 2026 03:24:06 +0000 (0:00:02.535) 0:00:22.804 ******* 2026-02-16 03:26:19.208652 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:26:19.208665 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:26:19.208676 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:26:19.208688 | orchestrator | 2026-02-16 03:26:19.208700 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-16 03:26:19.208711 | orchestrator | Monday 16 February 2026 03:24:07 +0000 (0:00:00.819) 0:00:23.624 ******* 2026-02-16 03:26:19.208723 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:26:19.208734 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:26:19.208745 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:26:19.208756 | orchestrator | 2026-02-16 03:26:19.208767 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] *************
2026-02-16 03:26:19.208778 | orchestrator | Monday 16 February 2026 03:24:07 +0000 (0:00:00.481) 0:00:24.105 *******
2026-02-16 03:26:19.208788 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:26:19.208821 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:26:19.208857 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:26:19.208868 | orchestrator |
2026-02-16 03:26:19.208879 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-02-16 03:26:19.208890 | orchestrator | Monday 16 February 2026 03:24:08 +0000 (0:00:00.310) 0:00:24.415 *******
2026-02-16 03:26:19.208949 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-02-16 03:26:19.208966 | orchestrator | ...ignoring
2026-02-16 03:26:19.208979 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-02-16 03:26:19.208992 | orchestrator | ...ignoring
2026-02-16 03:26:19.209004 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-02-16 03:26:19.209017 | orchestrator | ...ignoring
2026-02-16 03:26:19.209030 | orchestrator |
2026-02-16 03:26:19.209042 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-02-16 03:26:19.209055 | orchestrator | Monday 16 February 2026 03:24:18 +0000 (0:00:10.800) 0:00:35.216 *******
2026-02-16 03:26:19.209067 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:26:19.209079 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:26:19.209092 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:26:19.209104 | orchestrator |
2026-02-16 03:26:19.209116 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-02-16 03:26:19.209129 | orchestrator | Monday 16 February 2026 03:24:19 +0000 (0:00:00.384) 0:00:35.600 *******
2026-02-16 03:26:19.209141 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:26:19.209154 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:26:19.209166 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:26:19.209178 | orchestrator |
2026-02-16 03:26:19.209190 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-02-16 03:26:19.209203 | orchestrator | Monday 16 February 2026 03:24:19 +0000 (0:00:00.622) 0:00:36.222 *******
2026-02-16 03:26:19.209216 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:26:19.209228 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:26:19.209240 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:26:19.209252 | orchestrator |
2026-02-16 03:26:19.209265 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-02-16 03:26:19.209277 | orchestrator | Monday 16 February 2026 03:24:20 +0000 (0:00:00.381) 0:00:36.604 *******
2026-02-16 03:26:19.209289 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:26:19.209299 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:26:19.209311 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:26:19.209322 | orchestrator |
2026-02-16 03:26:19.209337 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-02-16 03:26:19.209349 | orchestrator | Monday 16 February 2026 03:24:20 +0000 (0:00:00.404) 0:00:37.009 *******
2026-02-16 03:26:19.209359 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:26:19.209370 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:26:19.209381 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:26:19.209392 | orchestrator |
2026-02-16 03:26:19.209403 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-02-16 03:26:19.209415 | orchestrator | Monday 16 February 2026 03:24:21 +0000 (0:00:00.382) 0:00:37.391 *******
2026-02-16 03:26:19.209426 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:26:19.209436 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:26:19.209447 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:26:19.209458 | orchestrator |
2026-02-16 03:26:19.209469 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-16 03:26:19.209479 | orchestrator | Monday 16 February 2026 03:24:21 +0000 (0:00:00.591) 0:00:37.983 *******
2026-02-16 03:26:19.209490 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:26:19.209501 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:26:19.209521 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-02-16 03:26:19.209532 | orchestrator |
2026-02-16 03:26:19.209543 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-02-16 03:26:19.209554 | orchestrator | Monday 16 February 2026 03:24:22 +0000 (0:00:00.390) 0:00:38.374 *******
2026-02-16 03:26:19.209565 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:26:19.209575 | orchestrator |
2026-02-16 03:26:19.209586 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-02-16 03:26:19.209597 | orchestrator | Monday 16 February 2026 03:24:31 +0000 (0:00:09.951) 0:00:48.326 *******
2026-02-16 03:26:19.209608 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:26:19.209618 | orchestrator |
2026-02-16 03:26:19.209629 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-16 03:26:19.209640 | orchestrator | Monday 16 February 2026 03:24:32 +0000 (0:00:00.115) 0:00:48.442 *******
2026-02-16 03:26:19.209651 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:26:19.209680 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:26:19.209693 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:26:19.209704 | orchestrator |
2026-02-16 03:26:19.209715 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-02-16 03:26:19.209726 | orchestrator | Monday 16 February 2026 03:24:33 +0000 (0:00:00.973) 0:00:49.415 *******
2026-02-16 03:26:19.209736 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:26:19.209747 | orchestrator |
2026-02-16 03:26:19.209758 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-02-16 03:26:19.209769 | orchestrator | Monday 16 February 2026 03:24:40 +0000 (0:00:07.199) 0:00:56.615 *******
2026-02-16 03:26:19.209780 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:26:19.209791 | orchestrator |
2026-02-16 03:26:19.209802 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-02-16 03:26:19.209812 | orchestrator | Monday 16 February 2026 03:24:42 +0000 (0:00:02.549) 0:00:59.165 *******
2026-02-16 03:26:19.209823 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:26:19.209851 | orchestrator |
2026-02-16 03:26:19.209862 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-02-16 03:26:19.209873 | orchestrator | Monday 16 February 2026 03:24:45 +0000 (0:00:02.401) 0:01:01.566 *******
2026-02-16 03:26:19.209884 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:26:19.209894 | orchestrator |
2026-02-16 03:26:19.209905 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-02-16 03:26:19.209916 | orchestrator | Monday 16 February 2026 03:24:45 +0000 (0:00:00.109) 0:01:01.675 *******
2026-02-16 03:26:19.209927 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:26:19.209937 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:26:19.209948 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:26:19.209959 | orchestrator |
2026-02-16 03:26:19.209970 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-02-16 03:26:19.209981 | orchestrator | Monday 16 February 2026 03:24:45 +0000 (0:00:00.315) 0:01:01.991 *******
2026-02-16 03:26:19.209992 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:26:19.210002 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-02-16 03:26:19.210013 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:26:19.210102 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:26:19.210114 | orchestrator |
2026-02-16 03:26:19.210125 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-16 03:26:19.210136 | orchestrator | skipping: no hosts matched
2026-02-16 03:26:19.210147 | orchestrator |
2026-02-16 03:26:19.210158 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-16 03:26:19.210169 | orchestrator |
2026-02-16 03:26:19.210179 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-16 03:26:19.210190 | orchestrator | Monday 16 February 2026 03:24:46 +0000 (0:00:00.511) 0:01:02.503 *******
2026-02-16 03:26:19.210209 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:26:19.210220 | orchestrator |
2026-02-16 03:26:19.210231 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-16 03:26:19.210242 | orchestrator | Monday 16 February 2026 03:25:08 +0000 (0:00:22.663) 0:01:25.166 *******
2026-02-16 03:26:19.210253 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:26:19.210263 | orchestrator |
2026-02-16 03:26:19.210275 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-16 03:26:19.210285 | orchestrator | Monday 16 February 2026 03:25:20 +0000 (0:00:11.522) 0:01:36.689 *******
2026-02-16 03:26:19.210296 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:26:19.210307 | orchestrator |
2026-02-16 03:26:19.210318 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-16 03:26:19.210329 | orchestrator |
2026-02-16 03:26:19.210339 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-16 03:26:19.210350 | orchestrator | Monday 16 February 2026 03:25:22 +0000 (0:00:02.293) 0:01:38.983 *******
2026-02-16 03:26:19.210361 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:26:19.210372 | orchestrator |
2026-02-16 03:26:19.210383 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-16 03:26:19.210399 | orchestrator | Monday 16 February 2026 03:25:45 +0000 (0:00:22.580) 0:02:01.564 *******
2026-02-16 03:26:19.210410 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:26:19.210421 | orchestrator |
2026-02-16 03:26:19.210432 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-16 03:26:19.210443 | orchestrator | Monday 16 February 2026 03:25:56 +0000 (0:00:11.569) 0:02:13.133 *******
2026-02-16 03:26:19.210454 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:26:19.210465 | orchestrator |
2026-02-16 03:26:19.210476 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-16 03:26:19.210486 | orchestrator |
2026-02-16 03:26:19.210497 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-16 03:26:19.210508 | orchestrator | Monday 16 February 2026 03:25:59 +0000 (0:00:02.401) 0:02:15.534 *******
2026-02-16 03:26:19.210518 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:26:19.210529 | orchestrator |
2026-02-16 03:26:19.210540 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-16 03:26:19.210551 | orchestrator | Monday 16 February 2026 03:26:10 +0000 (0:00:11.384) 0:02:26.919 *******
2026-02-16 03:26:19.210562 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:26:19.210573 | orchestrator |
2026-02-16 03:26:19.210583 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-16 03:26:19.210594 | orchestrator | Monday 16 February 2026 03:26:16 +0000 (0:00:05.558) 0:02:32.478 *******
2026-02-16 03:26:19.210605 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:26:19.210616 | orchestrator |
2026-02-16 03:26:19.210626 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-16 03:26:19.210637 | orchestrator |
2026-02-16 03:26:19.210648 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-16 03:26:19.210659 | orchestrator | Monday 16 February 2026 03:26:18 +0000 (0:00:02.413) 0:02:34.891 *******
2026-02-16 03:26:19.210670 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:26:19.210680 | orchestrator |
2026-02-16 03:26:19.210691 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-02-16 03:26:19.210711 | orchestrator | Monday 16 February 2026 03:26:19 +0000 (0:00:00.635) 0:02:35.527 *******
2026-02-16 03:26:31.471487 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:26:31.471599 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:26:31.471611 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:26:31.471623 | orchestrator |
2026-02-16 03:26:31.471638 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-02-16 03:26:31.471653 | orchestrator | Monday 16 February 2026 03:26:21 +0000 (0:00:02.261) 0:02:37.788 *******
2026-02-16 03:26:31.471681 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:26:31.471734 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:26:31.471751 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:26:31.471764 | orchestrator |
2026-02-16 03:26:31.471778 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-02-16 03:26:31.471792 | orchestrator | Monday 16 February 2026 03:26:23 +0000 (0:00:02.115) 0:02:39.903 *******
2026-02-16 03:26:31.471806 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:26:31.471819 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:26:31.471832 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:26:31.471844 | orchestrator |
2026-02-16 03:26:31.471906 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-02-16 03:26:31.471915 | orchestrator | Monday 16 February 2026 03:26:25 +0000 (0:00:02.332) 0:02:42.236 *******
2026-02-16 03:26:31.471923 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:26:31.471931 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:26:31.471939 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:26:31.471947 | orchestrator |
2026-02-16 03:26:31.471955 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-16 03:26:31.471963 | orchestrator | Monday 16 February 2026 03:26:28 +0000 (0:00:02.169) 0:02:44.406 *******
2026-02-16 03:26:31.471971 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:26:31.471981 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:26:31.471989 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:26:31.471996 | orchestrator |
2026-02-16 03:26:31.472005 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-16 03:26:31.472014 | orchestrator | Monday 16 February 2026 03:26:30 +0000 (0:00:02.727) 0:02:47.133 *******
2026-02-16 03:26:31.472023 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:26:31.472032 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:26:31.472041 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:26:31.472050 | orchestrator |
2026-02-16 03:26:31.472059 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 03:26:31.472069 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-02-16 03:26:31.472080 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-16 03:26:31.472089 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-16 03:26:31.472098 | orchestrator |
2026-02-16 03:26:31.472107 | orchestrator |
2026-02-16 03:26:31.472116 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 03:26:31.472126 | orchestrator | Monday 16 February 2026 03:26:31 +0000 (0:00:00.361) 0:02:47.494 *******
2026-02-16 03:26:31.472135 | orchestrator | ===============================================================================
2026-02-16 03:26:31.472144 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 45.24s
2026-02-16 03:26:31.472153 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 23.09s
2026-02-16 03:26:31.472162 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.38s
2026-02-16 03:26:31.472171 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.80s
2026-02-16 03:26:31.472194 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.95s
2026-02-16 03:26:31.472204 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.20s
2026-02-16 03:26:31.472213 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.56s
2026-02-16 03:26:31.472222 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.69s
2026-02-16 03:26:31.472231 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.52s
2026-02-16 03:26:31.472241 | orchestrator | mariadb : Copying over config.json files for services ------------------- 2.92s
2026-02-16 03:26:31.472261 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.73s
2026-02-16 03:26:31.472275 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.55s
2026-02-16 03:26:31.472295 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.54s
2026-02-16 03:26:31.472308 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.41s
2026-02-16 03:26:31.472320 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.40s
2026-02-16 03:26:31.472334 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.33s
2026-02-16 03:26:31.472348 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.32s
2026-02-16 03:26:31.472363 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.28s
2026-02-16 03:26:31.472372 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.26s
2026-02-16 03:26:31.472380 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.20s
2026-02-16 03:26:33.766533 | orchestrator | 2026-02-16 03:26:33 | INFO  | Task 38d79c0f-d142-4808-aad2-8a4eed592e7b (rabbitmq) was prepared for execution.
2026-02-16 03:26:33.766725 | orchestrator | 2026-02-16 03:26:33 | INFO  | It takes a moment until task 38d79c0f-d142-4808-aad2-8a4eed592e7b (rabbitmq) has been started and output is visible here.
2026-02-16 03:26:46.332064 | orchestrator |
2026-02-16 03:26:46.332201 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-16 03:26:46.332220 | orchestrator |
2026-02-16 03:26:46.332232 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-16 03:26:46.332243 | orchestrator | Monday 16 February 2026 03:26:37 +0000 (0:00:00.164) 0:00:00.164 *******
2026-02-16 03:26:46.332259 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:26:46.332278 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:26:46.332296 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:26:46.332312 | orchestrator |
2026-02-16 03:26:46.332331 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-16 03:26:46.332349 | orchestrator | Monday 16 February 2026 03:26:38 +0000 (0:00:00.279) 0:00:00.444 *******
2026-02-16 03:26:46.332368 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-02-16 03:26:46.332386 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-02-16 03:26:46.332405 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-02-16 03:26:46.332424 | orchestrator |
2026-02-16 03:26:46.332444 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-02-16 03:26:46.332464 | orchestrator |
2026-02-16 03:26:46.332483 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-16 03:26:46.332503 | orchestrator | Monday 16 February 2026 03:26:38 +0000 (0:00:00.523) 0:00:00.968 *******
2026-02-16 03:26:46.332523 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:26:46.332545 | orchestrator |
2026-02-16 03:26:46.332565 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-16 03:26:46.332579 | orchestrator | Monday 16 February 2026 03:26:39 +0000 (0:00:00.493) 0:00:01.461 *******
2026-02-16 03:26:46.332592 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:26:46.332605 | orchestrator |
2026-02-16 03:26:46.332617 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-02-16 03:26:46.332630 | orchestrator | Monday 16 February 2026 03:26:40 +0000 (0:00:00.958) 0:00:02.419 *******
2026-02-16 03:26:46.332642 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:26:46.332657 | orchestrator |
2026-02-16 03:26:46.332669 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-02-16 03:26:46.332682 | orchestrator | Monday 16 February 2026 03:26:40 +0000 (0:00:00.340) 0:00:02.759 *******
2026-02-16 03:26:46.332695 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:26:46.332732 | orchestrator |
2026-02-16 03:26:46.332746 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-02-16 03:26:46.332759 | orchestrator | Monday 16 February 2026 03:26:40 +0000 (0:00:00.346) 0:00:03.106 *******
2026-02-16 03:26:46.332770 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:26:46.332781 | orchestrator |
2026-02-16 03:26:46.332791 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-02-16 03:26:46.332802 | orchestrator | Monday 16 February 2026 03:26:41 +0000 (0:00:00.332) 0:00:03.439 *******
2026-02-16 03:26:46.332813 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:26:46.332824 | orchestrator |
2026-02-16 03:26:46.332835 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-16 03:26:46.332846 | orchestrator | Monday 16 February 2026 03:26:41 +0000 (0:00:00.504) 0:00:03.943 *******
2026-02-16 03:26:46.332856 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:26:46.332892 | orchestrator |
2026-02-16 03:26:46.332904 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-16 03:26:46.332930 | orchestrator | Monday 16 February 2026 03:26:42 +0000 (0:00:00.843) 0:00:04.786 *******
2026-02-16 03:26:46.332940 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:26:46.332950 | orchestrator |
2026-02-16 03:26:46.332959 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-02-16 03:26:46.332969 | orchestrator | Monday 16 February 2026 03:26:43 +0000 (0:00:00.842) 0:00:05.629 *******
2026-02-16 03:26:46.332978 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:26:46.332988 | orchestrator |
2026-02-16 03:26:46.332997 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-02-16 03:26:46.333007 | orchestrator | Monday 16 February 2026 03:26:43 +0000 (0:00:00.363) 0:00:05.993 *******
2026-02-16 03:26:46.333016 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:26:46.333026 | orchestrator |
2026-02-16 03:26:46.333035 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-02-16 03:26:46.333045 | orchestrator | Monday 16 February 2026 03:26:44 +0000 (0:00:00.352) 0:00:06.345 *******
2026-02-16 03:26:46.333096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-16 03:26:46.333113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-16 03:26:46.333134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-16 03:26:46.333145 | orchestrator |
2026-02-16 03:26:46.333155 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-02-16 03:26:46.333165 | orchestrator | Monday 16 February 2026 03:26:44 +0000 (0:00:00.771) 0:00:07.117 *******
2026-02-16 03:26:46.333193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-16 03:26:46.333237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-16 03:27:04.142148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-16 03:27:04.142402 | orchestrator |
2026-02-16 03:27:04.142436 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-02-16 03:27:04.142450 | orchestrator | Monday 16 February 2026 03:26:46 +0000 (0:00:01.509) 0:00:08.626 *******
2026-02-16 03:27:04.142463 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-16 03:27:04.142475 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-16 03:27:04.142486 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-16 03:27:04.142497 | orchestrator |
2026-02-16 03:27:04.142508 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-02-16 03:27:04.142519 | orchestrator | Monday 16 February 2026 03:26:47 +0000 (0:00:01.390) 0:00:10.016 *******
2026-02-16 03:27:04.142530 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-16 03:27:04.142542 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-16 03:27:04.142553 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-16 03:27:04.142564 | orchestrator |
2026-02-16 03:27:04.142589 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-02-16 03:27:04.142600 | orchestrator | Monday 16 February 2026 03:26:49 +0000 (0:00:01.590) 0:00:11.606 *******
2026-02-16 03:27:04.142611 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-16 03:27:04.142622 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-16 03:27:04.142633 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-16 03:27:04.142644 | orchestrator |
2026-02-16 03:27:04.142655 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-02-16 03:27:04.142665 | orchestrator | Monday 16 February 2026 03:26:50 +0000 (0:00:01.299) 0:00:12.906 *******
2026-02-16 03:27:04.142676 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-16 03:27:04.142687 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-16 03:27:04.142698 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-16 03:27:04.142709 | orchestrator |
2026-02-16 03:27:04.142720 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-02-16 03:27:04.142730 | orchestrator | Monday 16 February 2026 03:26:52 +0000 (0:00:01.554) 0:00:14.461 *******
2026-02-16 03:27:04.142744 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-16 03:27:04.142762 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-16 03:27:04.142790 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-16 03:27:04.142827 | orchestrator |
2026-02-16 03:27:04.142846 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-02-16 03:27:04.142864 | orchestrator | Monday 16 February 2026 03:26:53 +0000 (0:00:01.320) 0:00:15.781 *******
2026-02-16 03:27:04.142880 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-16 03:27:04.142925 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-16 03:27:04.142945 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-16 03:27:04.142963 | orchestrator |
2026-02-16 03:27:04.142980 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-16 03:27:04.142997 | orchestrator | Monday 16 February 2026 03:26:54 +0000 (0:00:01.358) 0:00:17.139 *******
2026-02-16 03:27:04.143015 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:27:04.143033 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:27:04.143074 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:27:04.143093 | orchestrator |
2026-02-16 03:27:04.143111 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-02-16 03:27:04.143130 | orchestrator | Monday 16 February 2026 03:26:55 +0000 (0:00:00.397) 0:00:17.537 *******
2026-02-16 03:27:04.143153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-16 03:27:04.143187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-16 03:27:04.143202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-16 03:27:04.143225 | orchestrator |
2026-02-16 03:27:04.143236 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-02-16 03:27:04.143247 | orchestrator | Monday 16 February 2026 03:26:56 +0000 (0:00:01.108) 0:00:18.645 *******
2026-02-16 03:27:04.143258 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:27:04.143269 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:27:04.143280 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:27:04.143291 | orchestrator |
2026-02-16 03:27:04.143302 | orchestrator | TASK
[rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-16 03:27:04.143312 | orchestrator | Monday 16 February 2026 03:26:57 +0000 (0:00:00.779) 0:00:19.424 ******* 2026-02-16 03:27:04.143323 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:27:04.143334 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:27:04.143345 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:27:04.143356 | orchestrator | 2026-02-16 03:27:04.143367 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-16 03:27:04.143394 | orchestrator | Monday 16 February 2026 03:27:04 +0000 (0:00:07.008) 0:00:26.433 ******* 2026-02-16 03:28:43.641900 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:28:43.642147 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:28:43.642168 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:28:43.642180 | orchestrator | 2026-02-16 03:28:43.642194 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-16 03:28:43.642206 | orchestrator | 2026-02-16 03:28:43.642217 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-16 03:28:43.642228 | orchestrator | Monday 16 February 2026 03:27:04 +0000 (0:00:00.437) 0:00:26.870 ******* 2026-02-16 03:28:43.642239 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:28:43.642251 | orchestrator | 2026-02-16 03:28:43.642262 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-16 03:28:43.642289 | orchestrator | Monday 16 February 2026 03:27:05 +0000 (0:00:00.594) 0:00:27.465 ******* 2026-02-16 03:28:43.642300 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:28:43.642311 | orchestrator | 2026-02-16 03:28:43.642336 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-16 03:28:43.642349 | orchestrator | Monday 16 
February 2026 03:27:05 +0000 (0:00:00.238) 0:00:27.703 ******* 2026-02-16 03:28:43.642361 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:28:43.642373 | orchestrator | 2026-02-16 03:28:43.642385 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-16 03:28:43.642397 | orchestrator | Monday 16 February 2026 03:27:06 +0000 (0:00:01.602) 0:00:29.306 ******* 2026-02-16 03:28:43.642409 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:28:43.642421 | orchestrator | 2026-02-16 03:28:43.642433 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-16 03:28:43.642445 | orchestrator | 2026-02-16 03:28:43.642458 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-16 03:28:43.642470 | orchestrator | Monday 16 February 2026 03:28:02 +0000 (0:00:55.917) 0:01:25.223 ******* 2026-02-16 03:28:43.642483 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:28:43.642495 | orchestrator | 2026-02-16 03:28:43.642507 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-16 03:28:43.642520 | orchestrator | Monday 16 February 2026 03:28:03 +0000 (0:00:00.636) 0:01:25.860 ******* 2026-02-16 03:28:43.642532 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:28:43.642544 | orchestrator | 2026-02-16 03:28:43.642581 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-16 03:28:43.642595 | orchestrator | Monday 16 February 2026 03:28:03 +0000 (0:00:00.219) 0:01:26.079 ******* 2026-02-16 03:28:43.642608 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:28:43.642620 | orchestrator | 2026-02-16 03:28:43.642632 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-16 03:28:43.642644 | orchestrator | Monday 16 February 2026 03:28:10 +0000 (0:00:06.489) 0:01:32.569 
******* 2026-02-16 03:28:43.642656 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:28:43.642668 | orchestrator | 2026-02-16 03:28:43.642682 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-16 03:28:43.642693 | orchestrator | 2026-02-16 03:28:43.642704 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-16 03:28:43.642730 | orchestrator | Monday 16 February 2026 03:28:20 +0000 (0:00:10.404) 0:01:42.974 ******* 2026-02-16 03:28:43.642758 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:28:43.642769 | orchestrator | 2026-02-16 03:28:43.642792 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-16 03:28:43.642803 | orchestrator | Monday 16 February 2026 03:28:21 +0000 (0:00:00.721) 0:01:43.695 ******* 2026-02-16 03:28:43.642813 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:28:43.642824 | orchestrator | 2026-02-16 03:28:43.642835 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-16 03:28:43.642846 | orchestrator | Monday 16 February 2026 03:28:21 +0000 (0:00:00.227) 0:01:43.923 ******* 2026-02-16 03:28:43.642857 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:28:43.642868 | orchestrator | 2026-02-16 03:28:43.642879 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-16 03:28:43.642890 | orchestrator | Monday 16 February 2026 03:28:23 +0000 (0:00:01.508) 0:01:45.432 ******* 2026-02-16 03:28:43.642901 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:28:43.642912 | orchestrator | 2026-02-16 03:28:43.642923 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-16 03:28:43.642934 | orchestrator | 2026-02-16 03:28:43.642944 | orchestrator | TASK [Include rabbitmq post-deploy.yml] 
**************************************** 2026-02-16 03:28:43.642955 | orchestrator | Monday 16 February 2026 03:28:40 +0000 (0:00:17.314) 0:02:02.746 ******* 2026-02-16 03:28:43.642966 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:28:43.642977 | orchestrator | 2026-02-16 03:28:43.642988 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-16 03:28:43.642998 | orchestrator | Monday 16 February 2026 03:28:40 +0000 (0:00:00.479) 0:02:03.226 ******* 2026-02-16 03:28:43.643009 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-16 03:28:43.643040 | orchestrator | enable_outward_rabbitmq_True 2026-02-16 03:28:43.643052 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-16 03:28:43.643063 | orchestrator | outward_rabbitmq_restart 2026-02-16 03:28:43.643073 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:28:43.643084 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:28:43.643095 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:28:43.643106 | orchestrator | 2026-02-16 03:28:43.643117 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-02-16 03:28:43.643127 | orchestrator | skipping: no hosts matched 2026-02-16 03:28:43.643138 | orchestrator | 2026-02-16 03:28:43.643149 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-02-16 03:28:43.643160 | orchestrator | skipping: no hosts matched 2026-02-16 03:28:43.643171 | orchestrator | 2026-02-16 03:28:43.643181 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-02-16 03:28:43.643192 | orchestrator | skipping: no hosts matched 2026-02-16 03:28:43.643203 | orchestrator | 2026-02-16 03:28:43.643214 | orchestrator | PLAY RECAP ********************************************************************* 
2026-02-16 03:28:43.643246 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-16 03:28:43.643270 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:28:43.643281 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:28:43.643292 | orchestrator | 2026-02-16 03:28:43.643303 | orchestrator | 2026-02-16 03:28:43.643314 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 03:28:43.643325 | orchestrator | Monday 16 February 2026 03:28:43 +0000 (0:00:02.417) 0:02:05.644 ******* 2026-02-16 03:28:43.643336 | orchestrator | =============================================================================== 2026-02-16 03:28:43.643347 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 83.64s 2026-02-16 03:28:43.643358 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.60s 2026-02-16 03:28:43.643368 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.01s 2026-02-16 03:28:43.643379 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.42s 2026-02-16 03:28:43.643390 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.95s 2026-02-16 03:28:43.643400 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.59s 2026-02-16 03:28:43.643411 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.55s 2026-02-16 03:28:43.643422 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.51s 2026-02-16 03:28:43.643433 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.39s 2026-02-16 03:28:43.643444 
| orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.36s 2026-02-16 03:28:43.643454 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.32s 2026-02-16 03:28:43.643465 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.30s 2026-02-16 03:28:43.643476 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.11s 2026-02-16 03:28:43.643487 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.96s 2026-02-16 03:28:43.643497 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.84s 2026-02-16 03:28:43.643508 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.84s 2026-02-16 03:28:43.643519 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.78s 2026-02-16 03:28:43.643543 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.77s 2026-02-16 03:28:43.643554 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.69s 2026-02-16 03:28:43.643566 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2026-02-16 03:28:45.881991 | orchestrator | 2026-02-16 03:28:45 | INFO  | Task da5288cc-ed71-4657-970c-05b04e3d43d7 (openvswitch) was prepared for execution. 2026-02-16 03:28:45.882162 | orchestrator | 2026-02-16 03:28:45 | INFO  | It takes a moment until task da5288cc-ed71-4657-970c-05b04e3d43d7 (openvswitch) has been started and output is visible here. 
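The rabbitmq run above shows the rolling-restart pattern that dominates the TASKS RECAP ("Waiting for rabbitmq to start" at 83.64s): each node is handled in its own "Restart rabbitmq services" play — get container info, optionally enter maintenance mode, restart, then wait for the broker to pass its healthcheck. As an illustrative sketch only (this is not the actual kolla-ansible role; the module choices and arguments are assumptions, while the task names and the `healthcheck_rabbitmq` command mirror the log), the pattern looks roughly like:

```yaml
# Sketch of the one-node-at-a-time restart seen in the log above.
# Assumptions: community.docker modules and the retry parameters are
# illustrative, not kolla-ansible's actual implementation.
- name: Restart rabbitmq services
  hosts: rabbitmq
  serial: 1                      # one broker at a time, preserving quorum
  tasks:
    - name: Get info on RabbitMQ container
      community.docker.docker_container_info:
        name: rabbitmq
      register: rabbitmq_info

    - name: Restart rabbitmq container
      community.docker.docker_container:
        name: rabbitmq
        state: started
        restart: true
      when: rabbitmq_info.exists

    - name: Waiting for rabbitmq to start
      ansible.builtin.command: docker exec rabbitmq healthcheck_rabbitmq
      register: result
      until: result.rc == 0      # the container's own healthcheck command
      retries: 30
      delay: 10
      changed_when: false
```

Restarting serially explains why the wall-clock cost scales with the node count here: the wait on testbed-node-0 alone accounts for the 0:00:55.917 gap between its restart and the next play.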
2026-02-16 03:28:57.187019 | orchestrator | 2026-02-16 03:28:57.187218 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 03:28:57.187235 | orchestrator | 2026-02-16 03:28:57.187248 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 03:28:57.187260 | orchestrator | Monday 16 February 2026 03:28:49 +0000 (0:00:00.225) 0:00:00.225 ******* 2026-02-16 03:28:57.187272 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:28:57.187284 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:28:57.187295 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:28:57.187306 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:28:57.187342 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:28:57.187353 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:28:57.187364 | orchestrator | 2026-02-16 03:28:57.187375 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 03:28:57.187386 | orchestrator | Monday 16 February 2026 03:28:50 +0000 (0:00:00.493) 0:00:00.718 ******* 2026-02-16 03:28:57.187398 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-16 03:28:57.187410 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-16 03:28:57.187421 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-16 03:28:57.187432 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-16 03:28:57.187443 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-16 03:28:57.187454 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-16 03:28:57.187465 | orchestrator | 2026-02-16 03:28:57.187475 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-02-16 03:28:57.187486 | orchestrator | 2026-02-16 03:28:57.187497 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-16 03:28:57.187509 | orchestrator | Monday 16 February 2026 03:28:50 +0000 (0:00:00.498) 0:00:01.217 ******* 2026-02-16 03:28:57.187521 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:28:57.187534 | orchestrator | 2026-02-16 03:28:57.187546 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-16 03:28:57.187556 | orchestrator | Monday 16 February 2026 03:28:51 +0000 (0:00:00.915) 0:00:02.133 ******* 2026-02-16 03:28:57.187567 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-16 03:28:57.187579 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-16 03:28:57.187590 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-16 03:28:57.187601 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-16 03:28:57.187612 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-16 03:28:57.187622 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-16 03:28:57.187633 | orchestrator | 2026-02-16 03:28:57.187645 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-16 03:28:57.187656 | orchestrator | Monday 16 February 2026 03:28:52 +0000 (0:00:00.996) 0:00:03.129 ******* 2026-02-16 03:28:57.187674 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-16 03:28:57.187691 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-16 03:28:57.187707 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-16 03:28:57.187725 | orchestrator | changed: 
[testbed-node-4] => (item=openvswitch) 2026-02-16 03:28:57.187744 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-16 03:28:57.187763 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-16 03:28:57.187783 | orchestrator | 2026-02-16 03:28:57.187801 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-16 03:28:57.187820 | orchestrator | Monday 16 February 2026 03:28:54 +0000 (0:00:01.322) 0:00:04.451 ******* 2026-02-16 03:28:57.187834 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-16 03:28:57.187844 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:28:57.187856 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-16 03:28:57.187867 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:28:57.187877 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-16 03:28:57.187888 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:28:57.187899 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-16 03:28:57.187909 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:28:57.187920 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-16 03:28:57.187937 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:28:57.187948 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-16 03:28:57.187959 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:28:57.187969 | orchestrator | 2026-02-16 03:28:57.187980 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-16 03:28:57.187991 | orchestrator | Monday 16 February 2026 03:28:55 +0000 (0:00:01.079) 0:00:05.530 ******* 2026-02-16 03:28:57.188002 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:28:57.188012 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:28:57.188023 | orchestrator | skipping: [testbed-node-2] 
2026-02-16 03:28:57.188152 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:28:57.188223 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:28:57.188235 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:28:57.188245 | orchestrator | 2026-02-16 03:28:57.188256 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-16 03:28:57.188268 | orchestrator | Monday 16 February 2026 03:28:55 +0000 (0:00:00.718) 0:00:06.249 ******* 2026-02-16 03:28:57.188304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:28:57.188321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:28:57.188333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:28:57.188345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:28:57.188366 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:28:57.188392 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:28:59.469254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 03:28:59.469364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 03:28:59.469423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 03:28:59.469445 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 03:28:59.469498 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 03:28:59.469564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 03:28:59.469580 | orchestrator | 2026-02-16 03:28:59.469594 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-16 03:28:59.469607 | orchestrator | Monday 16 February 2026 03:28:57 +0000 (0:00:01.353) 0:00:07.602 ******* 2026-02-16 03:28:59.469619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:28:59.469632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:28:59.469643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:28:59.469664 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:28:59.469681 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:28:59.469702 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:29:01.955632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 03:29:01.955743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 03:29:01.955783 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 03:29:01.955797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 03:29:01.955823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 03:29:01.955855 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 03:29:01.955868 | orchestrator | 2026-02-16 03:29:01.955882 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-16 03:29:01.955895 | orchestrator | Monday 16 February 2026 03:28:59 +0000 (0:00:02.280) 0:00:09.883 ******* 2026-02-16 03:29:01.955907 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:29:01.955919 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:29:01.955930 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:29:01.955941 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:29:01.955952 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:29:01.955963 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:29:01.955973 | orchestrator | 2026-02-16 03:29:01.955985 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-16 03:29:01.955996 | orchestrator | Monday 16 February 2026 03:29:00 +0000 (0:00:00.876) 0:00:10.760 ******* 2026-02-16 03:29:01.956008 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:29:01.956035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:29:01.956074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:29:01.956092 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:29:01.956115 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:29:27.043177 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 03:29:27.043312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 03:29:27.043327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 
03:29:27.043352 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 03:29:27.043363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 03:29:27.043389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 03:29:27.043400 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 03:29:27.043421 | orchestrator | 2026-02-16 03:29:27.043433 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-16 03:29:27.043444 | orchestrator | Monday 16 February 2026 03:29:02 +0000 (0:00:01.614) 0:00:12.375 ******* 2026-02-16 03:29:27.043454 | orchestrator | 2026-02-16 03:29:27.043464 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-16 03:29:27.043474 | orchestrator | Monday 16 February 2026 03:29:02 +0000 (0:00:00.277) 0:00:12.653 ******* 2026-02-16 03:29:27.043483 | orchestrator | 2026-02-16 03:29:27.043493 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-16 03:29:27.043502 | orchestrator | Monday 16 February 2026 03:29:02 +0000 (0:00:00.130) 0:00:12.783 ******* 2026-02-16 03:29:27.043512 | orchestrator | 2026-02-16 03:29:27.043521 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-02-16 03:29:27.043531 | orchestrator | Monday 16 February 2026 03:29:02 +0000 (0:00:00.127) 0:00:12.911 ******* 2026-02-16 03:29:27.043540 | orchestrator | 2026-02-16 03:29:27.043550 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-16 03:29:27.043560 | orchestrator | Monday 16 February 2026 03:29:02 +0000 (0:00:00.128) 0:00:13.040 ******* 2026-02-16 03:29:27.043569 | orchestrator | 2026-02-16 03:29:27.043579 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-16 03:29:27.043588 | orchestrator | Monday 16 February 2026 03:29:02 +0000 (0:00:00.129) 0:00:13.169 ******* 2026-02-16 03:29:27.043598 | orchestrator | 2026-02-16 03:29:27.043608 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-16 03:29:27.043617 | orchestrator | Monday 16 February 2026 03:29:02 +0000 (0:00:00.139) 0:00:13.308 ******* 2026-02-16 03:29:27.043627 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:29:27.043638 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:29:27.043647 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:29:27.043657 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:29:27.043666 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:29:27.043676 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:29:27.043687 | orchestrator | 2026-02-16 03:29:27.043699 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-16 03:29:27.043711 | orchestrator | Monday 16 February 2026 03:29:11 +0000 (0:00:08.734) 0:00:22.042 ******* 2026-02-16 03:29:27.043723 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:29:27.043735 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:29:27.043746 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:29:27.043757 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:29:27.043768 | orchestrator | ok: 
[testbed-node-4] 2026-02-16 03:29:27.043780 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:29:27.043791 | orchestrator | 2026-02-16 03:29:27.043802 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-16 03:29:27.043817 | orchestrator | Monday 16 February 2026 03:29:12 +0000 (0:00:01.025) 0:00:23.068 ******* 2026-02-16 03:29:27.043829 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:29:27.043841 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:29:27.043852 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:29:27.043863 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:29:27.043875 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:29:27.043886 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:29:27.043897 | orchestrator | 2026-02-16 03:29:27.043908 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-16 03:29:27.043927 | orchestrator | Monday 16 February 2026 03:29:20 +0000 (0:00:08.007) 0:00:31.075 ******* 2026-02-16 03:29:27.043938 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-16 03:29:27.043948 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-16 03:29:27.043958 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-16 03:29:27.043967 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-16 03:29:27.043977 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-16 03:29:27.043987 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-16 
03:29:27.043996 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-16 03:29:27.044012 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-16 03:29:39.824259 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-16 03:29:39.824380 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-16 03:29:39.824406 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-16 03:29:39.824427 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-16 03:29:39.824448 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-16 03:29:39.824469 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-16 03:29:39.824488 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-16 03:29:39.824510 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-16 03:29:39.824523 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-16 03:29:39.824534 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-16 03:29:39.824545 | orchestrator | 2026-02-16 03:29:39.824558 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
2026-02-16 03:29:39.824570 | orchestrator | Monday 16 February 2026 03:29:27 +0000 (0:00:06.291) 0:00:37.366 ******* 2026-02-16 03:29:39.824582 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-16 03:29:39.824593 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:29:39.824606 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-16 03:29:39.824617 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:29:39.824628 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-16 03:29:39.824639 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:29:39.824650 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-02-16 03:29:39.824661 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-02-16 03:29:39.824672 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-02-16 03:29:39.824682 | orchestrator | 2026-02-16 03:29:39.824694 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-16 03:29:39.824704 | orchestrator | Monday 16 February 2026 03:29:29 +0000 (0:00:02.308) 0:00:39.675 ******* 2026-02-16 03:29:39.824715 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-16 03:29:39.824761 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:29:39.824782 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-16 03:29:39.824802 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:29:39.824821 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-16 03:29:39.824841 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:29:39.824861 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-16 03:29:39.824880 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-16 03:29:39.824897 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-16 03:29:39.824910 | orchestrator 
| 2026-02-16 03:29:39.824922 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-16 03:29:39.824934 | orchestrator | Monday 16 February 2026 03:29:32 +0000 (0:00:02.947) 0:00:42.623 ******* 2026-02-16 03:29:39.824947 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:29:39.824983 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:29:39.824996 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:29:39.825009 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:29:39.825021 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:29:39.825033 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:29:39.825045 | orchestrator | 2026-02-16 03:29:39.825057 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 03:29:39.825071 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-16 03:29:39.825084 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-16 03:29:39.825159 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-16 03:29:39.825181 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-16 03:29:39.825199 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-16 03:29:39.825219 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-16 03:29:39.825237 | orchestrator | 2026-02-16 03:29:39.825256 | orchestrator | 2026-02-16 03:29:39.825270 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 03:29:39.825281 | orchestrator | Monday 16 February 2026 03:29:39 +0000 (0:00:07.175) 0:00:49.798 ******* 2026-02-16 03:29:39.825310 | 
orchestrator | =============================================================================== 2026-02-16 03:29:39.825321 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.18s 2026-02-16 03:29:39.825332 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.73s 2026-02-16 03:29:39.825343 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.29s 2026-02-16 03:29:39.825353 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 2.95s 2026-02-16 03:29:39.825364 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.31s 2026-02-16 03:29:39.825375 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.28s 2026-02-16 03:29:39.825385 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.61s 2026-02-16 03:29:39.825396 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.35s 2026-02-16 03:29:39.825407 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.32s 2026-02-16 03:29:39.825417 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.08s 2026-02-16 03:29:39.825440 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.03s 2026-02-16 03:29:39.825451 | orchestrator | module-load : Load modules ---------------------------------------------- 1.00s 2026-02-16 03:29:39.825462 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.93s 2026-02-16 03:29:39.825473 | orchestrator | openvswitch : include_tasks --------------------------------------------- 0.92s 2026-02-16 03:29:39.825486 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.88s 2026-02-16 03:29:39.825505 | orchestrator | 
openvswitch : Create /run/openvswitch directory on host ----------------- 0.72s 2026-02-16 03:29:39.825523 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s 2026-02-16 03:29:39.825540 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s 2026-02-16 03:29:42.058314 | orchestrator | 2026-02-16 03:29:42 | INFO  | Task 5a21eb5c-df53-46b9-8106-f41fe871998a (ovn) was prepared for execution. 2026-02-16 03:29:42.058397 | orchestrator | 2026-02-16 03:29:42 | INFO  | It takes a moment until task 5a21eb5c-df53-46b9-8106-f41fe871998a (ovn) has been started and output is visible here. 2026-02-16 03:29:52.069349 | orchestrator | 2026-02-16 03:29:52.069463 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 03:29:52.069480 | orchestrator | 2026-02-16 03:29:52.069493 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 03:29:52.069505 | orchestrator | Monday 16 February 2026 03:29:45 +0000 (0:00:00.156) 0:00:00.156 ******* 2026-02-16 03:29:52.069516 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:29:52.069527 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:29:52.069538 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:29:52.069549 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:29:52.069559 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:29:52.069570 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:29:52.069581 | orchestrator | 2026-02-16 03:29:52.069592 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 03:29:52.069603 | orchestrator | Monday 16 February 2026 03:29:46 +0000 (0:00:00.642) 0:00:00.799 ******* 2026-02-16 03:29:52.069614 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-16 03:29:52.069625 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-16 
03:29:52.069636 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-16 03:29:52.069647 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-16 03:29:52.069673 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-16 03:29:52.069684 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-16 03:29:52.069695 | orchestrator | 2026-02-16 03:29:52.069706 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-16 03:29:52.069717 | orchestrator | 2026-02-16 03:29:52.069728 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-16 03:29:52.069740 | orchestrator | Monday 16 February 2026 03:29:47 +0000 (0:00:00.737) 0:00:01.537 ******* 2026-02-16 03:29:52.069751 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:29:52.069764 | orchestrator | 2026-02-16 03:29:52.069804 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-16 03:29:52.069817 | orchestrator | Monday 16 February 2026 03:29:48 +0000 (0:00:01.045) 0:00:02.582 ******* 2026-02-16 03:29:52.069831 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:29:52.069867 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:29:52.069882 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:29:52.069895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:29:52.069908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:29:52.069940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:29:52.069953 | orchestrator | 2026-02-16 03:29:52.069966 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-16 03:29:52.069979 | orchestrator | Monday 16 February 2026 03:29:49 +0000 (0:00:01.098) 0:00:03.681 ******* 2026-02-16 03:29:52.069992 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:29:52.070011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:29:52.070086 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:29:52.070100 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:29:52.070151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:29:52.070165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:29:52.070178 | orchestrator | 2026-02-16 03:29:52.070191 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-16 03:29:52.070204 | orchestrator | Monday 16 February 2026 03:29:50 +0000 (0:00:01.466) 0:00:05.147 ******* 2026-02-16 03:29:52.070217 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:29:52.070230 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:29:52.070253 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:16.076741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:16.076903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:16.076923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:16.076959 | orchestrator | 2026-02-16 03:30:16.076974 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-16 03:30:16.076988 | orchestrator | Monday 16 February 2026 03:29:52 +0000 (0:00:01.095) 0:00:06.243 ******* 2026-02-16 03:30:16.076999 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:16.077011 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:16.077022 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:16.077034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:16.077045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:16.077074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:16.077087 | orchestrator | 2026-02-16 03:30:16.077099 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-02-16 03:30:16.077110 | orchestrator | Monday 16 February 2026 03:29:53 +0000 (0:00:01.537) 0:00:07.780 ******* 
2026-02-16 03:30:16.077134 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:16.077205 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:16.077217 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:16.077228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:16.077244 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:16.077262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:16.077282 | orchestrator | 2026-02-16 03:30:16.077300 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-16 03:30:16.077318 | orchestrator | Monday 16 February 2026 03:29:54 +0000 (0:00:01.289) 0:00:09.069 ******* 2026-02-16 03:30:16.077339 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:30:16.077361 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:30:16.077380 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:30:16.077402 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:30:16.077417 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:30:16.077430 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:30:16.077442 | orchestrator | 2026-02-16 03:30:16.077455 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-16 03:30:16.077468 | orchestrator | Monday 16 February 2026 03:29:57 +0000 (0:00:02.317) 0:00:11.387 ******* 2026-02-16 03:30:16.077480 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
2026-02-16 03:30:16.077494 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-16 03:30:16.077506 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-16 03:30:16.077520 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-16 03:30:16.077533 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-16 03:30:16.077545 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-16 03:30:16.077574 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-16 03:30:49.257203 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-16 03:30:49.257278 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-16 03:30:49.257283 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-16 03:30:49.257288 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-16 03:30:49.257292 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-16 03:30:49.257307 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-16 03:30:49.257313 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-16 03:30:49.257317 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-16 03:30:49.257322 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-16 03:30:49.257325 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-16 03:30:49.257329 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-16 03:30:49.257334 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-16 03:30:49.257339 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-16 03:30:49.257343 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-16 03:30:49.257346 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-16 03:30:49.257350 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-16 03:30:49.257354 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-16 03:30:49.257358 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-16 03:30:49.257362 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-16 03:30:49.257366 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-16 03:30:49.257370 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-16 03:30:49.257374 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-02-16 03:30:49.257378 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-16 03:30:49.257381 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-16 03:30:49.257385 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-16 03:30:49.257389 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-16 03:30:49.257393 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-16 03:30:49.257397 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-16 03:30:49.257401 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-16 03:30:49.257418 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-16 03:30:49.257423 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-16 03:30:49.257428 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-16 03:30:49.257434 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-16 03:30:49.257440 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-16 03:30:49.257446 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-16 03:30:49.257452 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 
'present'}) 2026-02-16 03:30:49.257472 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-16 03:30:49.257478 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-16 03:30:49.257484 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-16 03:30:49.257490 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-16 03:30:49.257499 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-16 03:30:49.257506 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-16 03:30:49.257512 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-16 03:30:49.257516 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-16 03:30:49.257519 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-16 03:30:49.257523 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-16 03:30:49.257527 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-16 03:30:49.257531 | orchestrator | 2026-02-16 03:30:49.257536 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-02-16 03:30:49.257540 | orchestrator | Monday 16 February 2026 03:30:15 +0000 (0:00:18.303) 0:00:29.690 ******* 2026-02-16 03:30:49.257543 | orchestrator | 2026-02-16 03:30:49.257547 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-16 03:30:49.257551 | orchestrator | Monday 16 February 2026 03:30:15 +0000 (0:00:00.231) 0:00:29.921 ******* 2026-02-16 03:30:49.257555 | orchestrator | 2026-02-16 03:30:49.257558 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-16 03:30:49.257562 | orchestrator | Monday 16 February 2026 03:30:15 +0000 (0:00:00.064) 0:00:29.986 ******* 2026-02-16 03:30:49.257566 | orchestrator | 2026-02-16 03:30:49.257569 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-16 03:30:49.257573 | orchestrator | Monday 16 February 2026 03:30:15 +0000 (0:00:00.062) 0:00:30.048 ******* 2026-02-16 03:30:49.257577 | orchestrator | 2026-02-16 03:30:49.257581 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-16 03:30:49.257584 | orchestrator | Monday 16 February 2026 03:30:15 +0000 (0:00:00.061) 0:00:30.109 ******* 2026-02-16 03:30:49.257592 | orchestrator | 2026-02-16 03:30:49.257596 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-16 03:30:49.257599 | orchestrator | Monday 16 February 2026 03:30:15 +0000 (0:00:00.063) 0:00:30.172 ******* 2026-02-16 03:30:49.257603 | orchestrator | 2026-02-16 03:30:49.257607 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-16 03:30:49.257611 | orchestrator | Monday 16 February 2026 03:30:16 +0000 (0:00:00.063) 0:00:30.235 ******* 2026-02-16 03:30:49.257614 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:30:49.257619 | orchestrator | ok: 
[testbed-node-1] 2026-02-16 03:30:49.257623 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:30:49.257627 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:30:49.257630 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:30:49.257634 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:30:49.257638 | orchestrator | 2026-02-16 03:30:49.257641 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-16 03:30:49.257645 | orchestrator | Monday 16 February 2026 03:30:17 +0000 (0:00:01.485) 0:00:31.721 ******* 2026-02-16 03:30:49.257649 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:30:49.257653 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:30:49.257657 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:30:49.257660 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:30:49.257664 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:30:49.257668 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:30:49.257671 | orchestrator | 2026-02-16 03:30:49.257675 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-16 03:30:49.257679 | orchestrator | 2026-02-16 03:30:49.257682 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-16 03:30:49.257686 | orchestrator | Monday 16 February 2026 03:30:47 +0000 (0:00:29.603) 0:01:01.324 ******* 2026-02-16 03:30:49.257690 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:30:49.257694 | orchestrator | 2026-02-16 03:30:49.257697 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-16 03:30:49.257701 | orchestrator | Monday 16 February 2026 03:30:47 +0000 (0:00:00.661) 0:01:01.985 ******* 2026-02-16 03:30:49.257705 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-16 03:30:49.257709 | orchestrator | 2026-02-16 03:30:49.257712 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-16 03:30:49.257716 | orchestrator | Monday 16 February 2026 03:30:48 +0000 (0:00:00.535) 0:01:02.521 ******* 2026-02-16 03:30:49.257720 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:30:49.257723 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:30:49.257727 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:30:49.257731 | orchestrator | 2026-02-16 03:30:49.257735 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-16 03:30:49.257741 | orchestrator | Monday 16 February 2026 03:30:49 +0000 (0:00:00.899) 0:01:03.420 ******* 2026-02-16 03:30:59.659040 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:30:59.659267 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:30:59.659299 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:30:59.659314 | orchestrator | 2026-02-16 03:30:59.659328 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-16 03:30:59.659341 | orchestrator | Monday 16 February 2026 03:30:49 +0000 (0:00:00.301) 0:01:03.721 ******* 2026-02-16 03:30:59.659352 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:30:59.659363 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:30:59.659374 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:30:59.659385 | orchestrator | 2026-02-16 03:30:59.659397 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-16 03:30:59.659429 | orchestrator | Monday 16 February 2026 03:30:49 +0000 (0:00:00.315) 0:01:04.037 ******* 2026-02-16 03:30:59.659441 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:30:59.659451 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:30:59.659497 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:30:59.659508 | orchestrator | 
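The lookup_cluster.yml include above groups the three controller hosts by whether an OVN NB/SB database container volume already exists, which drives the later decision to bootstrap a fresh cluster (as happens in this run, where all hosts are clean) rather than join an existing one. A simplified, hypothetical sketch of that grouping decision — not the actual kolla-ansible role logic, just the idea:

```python
# Hypothetical sketch of the ovn-db "divide hosts by volume availability"
# step: hosts that already hold a DB volume imply a pre-existing cluster.
def divide_hosts(volume_present):
    """volume_present: dict mapping host name -> bool (DB volume found)."""
    with_volume = [h for h, found in volume_present.items() if found]
    without_volume = [h for h, found in volume_present.items() if not found]
    cluster_exists = len(with_volume) > 0
    return with_volume, without_volume, cluster_exists

# In this run all three DB hosts are fresh, so a new cluster is bootstrapped.
hosts = {"testbed-node-0": False, "testbed-node-1": False, "testbed-node-2": False}
print(divide_hosts(hosts))  # → ([], ['testbed-node-0', 'testbed-node-1', 'testbed-node-2'], False)
```

This matches what the log shows next: the liveness and leader/follower checks are all skipped, and `bootstrap-initial.yml` is included for all three nodes.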
2026-02-16 03:30:59.659519 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-16 03:30:59.659530 | orchestrator | Monday 16 February 2026 03:30:50 +0000 (0:00:00.292) 0:01:04.330 ******* 2026-02-16 03:30:59.659541 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:30:59.659552 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:30:59.659562 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:30:59.659573 | orchestrator | 2026-02-16 03:30:59.659584 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-16 03:30:59.659595 | orchestrator | Monday 16 February 2026 03:30:50 +0000 (0:00:00.494) 0:01:04.824 ******* 2026-02-16 03:30:59.659606 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.659619 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.659630 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.659640 | orchestrator | 2026-02-16 03:30:59.659651 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-16 03:30:59.659662 | orchestrator | Monday 16 February 2026 03:30:50 +0000 (0:00:00.281) 0:01:05.106 ******* 2026-02-16 03:30:59.659673 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.659684 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.659694 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.659705 | orchestrator | 2026-02-16 03:30:59.659716 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-16 03:30:59.659727 | orchestrator | Monday 16 February 2026 03:30:51 +0000 (0:00:00.290) 0:01:05.397 ******* 2026-02-16 03:30:59.659737 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.659748 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.659759 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.659770 | orchestrator | 2026-02-16 
03:30:59.659781 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-16 03:30:59.659792 | orchestrator | Monday 16 February 2026 03:30:51 +0000 (0:00:00.286) 0:01:05.683 ******* 2026-02-16 03:30:59.659802 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.659813 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.659824 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.659835 | orchestrator | 2026-02-16 03:30:59.659846 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-16 03:30:59.659857 | orchestrator | Monday 16 February 2026 03:30:51 +0000 (0:00:00.300) 0:01:05.984 ******* 2026-02-16 03:30:59.659868 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.659879 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.659889 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.659900 | orchestrator | 2026-02-16 03:30:59.659911 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-16 03:30:59.659922 | orchestrator | Monday 16 February 2026 03:30:52 +0000 (0:00:00.473) 0:01:06.457 ******* 2026-02-16 03:30:59.659933 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.659944 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.659955 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.659966 | orchestrator | 2026-02-16 03:30:59.659976 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-16 03:30:59.659987 | orchestrator | Monday 16 February 2026 03:30:52 +0000 (0:00:00.272) 0:01:06.729 ******* 2026-02-16 03:30:59.659998 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.660008 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.660019 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.660030 | orchestrator | 2026-02-16 
03:30:59.660041 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-16 03:30:59.660051 | orchestrator | Monday 16 February 2026 03:30:52 +0000 (0:00:00.284) 0:01:07.014 ******* 2026-02-16 03:30:59.660062 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.660073 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.660084 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.660103 | orchestrator | 2026-02-16 03:30:59.660114 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-16 03:30:59.660125 | orchestrator | Monday 16 February 2026 03:30:53 +0000 (0:00:00.277) 0:01:07.291 ******* 2026-02-16 03:30:59.660141 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.660160 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.660177 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.660219 | orchestrator | 2026-02-16 03:30:59.660240 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-16 03:30:59.660257 | orchestrator | Monday 16 February 2026 03:30:53 +0000 (0:00:00.449) 0:01:07.741 ******* 2026-02-16 03:30:59.660272 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.660284 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.660294 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.660305 | orchestrator | 2026-02-16 03:30:59.660316 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-16 03:30:59.660327 | orchestrator | Monday 16 February 2026 03:30:53 +0000 (0:00:00.269) 0:01:08.011 ******* 2026-02-16 03:30:59.660338 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.660348 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.660359 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.660370 | orchestrator | 2026-02-16 
03:30:59.660381 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-02-16 03:30:59.660392 | orchestrator | Monday 16 February 2026 03:30:54 +0000 (0:00:00.267) 0:01:08.278 ******* 2026-02-16 03:30:59.660424 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.660436 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.660447 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.660458 | orchestrator | 2026-02-16 03:30:59.660469 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-16 03:30:59.660479 | orchestrator | Monday 16 February 2026 03:30:54 +0000 (0:00:00.310) 0:01:08.589 ******* 2026-02-16 03:30:59.660491 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:30:59.660502 | orchestrator | 2026-02-16 03:30:59.660513 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-02-16 03:30:59.660525 | orchestrator | Monday 16 February 2026 03:30:55 +0000 (0:00:00.715) 0:01:09.305 ******* 2026-02-16 03:30:59.660535 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:30:59.660546 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:30:59.660557 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:30:59.660567 | orchestrator | 2026-02-16 03:30:59.660578 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-02-16 03:30:59.660589 | orchestrator | Monday 16 February 2026 03:30:55 +0000 (0:00:00.428) 0:01:09.733 ******* 2026-02-16 03:30:59.660600 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:30:59.660611 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:30:59.660621 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:30:59.660632 | orchestrator | 2026-02-16 03:30:59.660643 | orchestrator | TASK [ovn-db : Check NB cluster status] 
**************************************** 2026-02-16 03:30:59.660654 | orchestrator | Monday 16 February 2026 03:30:55 +0000 (0:00:00.404) 0:01:10.138 ******* 2026-02-16 03:30:59.660665 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.660675 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.660686 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.660697 | orchestrator | 2026-02-16 03:30:59.660708 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-02-16 03:30:59.660791 | orchestrator | Monday 16 February 2026 03:30:56 +0000 (0:00:00.490) 0:01:10.628 ******* 2026-02-16 03:30:59.660805 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.660816 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.660827 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.660837 | orchestrator | 2026-02-16 03:30:59.660849 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-02-16 03:30:59.660869 | orchestrator | Monday 16 February 2026 03:30:56 +0000 (0:00:00.314) 0:01:10.943 ******* 2026-02-16 03:30:59.660881 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.660892 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.660902 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.660913 | orchestrator | 2026-02-16 03:30:59.660924 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-02-16 03:30:59.660935 | orchestrator | Monday 16 February 2026 03:30:57 +0000 (0:00:00.324) 0:01:11.268 ******* 2026-02-16 03:30:59.660946 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.660957 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.660968 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.660978 | orchestrator | 2026-02-16 03:30:59.660989 | orchestrator | TASK [ovn-db : Set 
bootstrap args fact for NB (new member)] ******************** 2026-02-16 03:30:59.661000 | orchestrator | Monday 16 February 2026 03:30:57 +0000 (0:00:00.337) 0:01:11.605 ******* 2026-02-16 03:30:59.661011 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.661022 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.661033 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.661044 | orchestrator | 2026-02-16 03:30:59.661054 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-02-16 03:30:59.661065 | orchestrator | Monday 16 February 2026 03:30:57 +0000 (0:00:00.310) 0:01:11.916 ******* 2026-02-16 03:30:59.661076 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:30:59.661087 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:30:59.661098 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:30:59.661108 | orchestrator | 2026-02-16 03:30:59.661119 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-16 03:30:59.661132 | orchestrator | Monday 16 February 2026 03:30:58 +0000 (0:00:00.502) 0:01:12.418 ******* 2026-02-16 03:30:59.661156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:59.661179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-16 03:30:59.661235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:30:59.661280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.868786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.868948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.868978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.868997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.869015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.869031 | orchestrator | 2026-02-16 03:31:05.869053 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-16 03:31:05.869072 | orchestrator | Monday 16 February 2026 03:30:59 +0000 (0:00:01.405) 0:01:13.824 ******* 2026-02-16 03:31:05.869092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.869113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.869132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.869152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.869187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.869270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.869284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.869295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.869308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.869320 | orchestrator | 2026-02-16 03:31:05.869331 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-16 03:31:05.869344 | orchestrator | Monday 16 February 2026 03:31:03 +0000 (0:00:03.749) 0:01:17.573 ******* 2026-02-16 03:31:05.869365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.869399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.869419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.869439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.869457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:05.869513 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:35.230146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:35.230272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:35.230282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:35.230287 | orchestrator | 2026-02-16 03:31:35.230292 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-16 03:31:35.230297 | 
orchestrator | Monday 16 February 2026 03:31:05 +0000 (0:00:02.063) 0:01:19.637 ******* 2026-02-16 03:31:35.230301 | orchestrator | 2026-02-16 03:31:35.230305 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-16 03:31:35.230309 | orchestrator | Monday 16 February 2026 03:31:05 +0000 (0:00:00.064) 0:01:19.701 ******* 2026-02-16 03:31:35.230312 | orchestrator | 2026-02-16 03:31:35.230317 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-16 03:31:35.230370 | orchestrator | Monday 16 February 2026 03:31:05 +0000 (0:00:00.268) 0:01:19.970 ******* 2026-02-16 03:31:35.230375 | orchestrator | 2026-02-16 03:31:35.230380 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-16 03:31:35.230383 | orchestrator | Monday 16 February 2026 03:31:05 +0000 (0:00:00.063) 0:01:20.033 ******* 2026-02-16 03:31:35.230388 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:31:35.230393 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:31:35.230397 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:31:35.230401 | orchestrator | 2026-02-16 03:31:35.230405 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-16 03:31:35.230408 | orchestrator | Monday 16 February 2026 03:31:13 +0000 (0:00:07.340) 0:01:27.374 ******* 2026-02-16 03:31:35.230412 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:31:35.230416 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:31:35.230420 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:31:35.230423 | orchestrator | 2026-02-16 03:31:35.230427 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-16 03:31:35.230431 | orchestrator | Monday 16 February 2026 03:31:21 +0000 (0:00:07.850) 0:01:35.224 ******* 2026-02-16 03:31:35.230435 | orchestrator | changed: 
[testbed-node-1] 2026-02-16 03:31:35.230439 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:31:35.230442 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:31:35.230460 | orchestrator | 2026-02-16 03:31:35.230464 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-16 03:31:35.230468 | orchestrator | Monday 16 February 2026 03:31:28 +0000 (0:00:07.429) 0:01:42.653 ******* 2026-02-16 03:31:35.230471 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:31:35.230475 | orchestrator | 2026-02-16 03:31:35.230479 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-16 03:31:35.230483 | orchestrator | Monday 16 February 2026 03:31:28 +0000 (0:00:00.137) 0:01:42.790 ******* 2026-02-16 03:31:35.230487 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:31:35.230491 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:31:35.230495 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:31:35.230499 | orchestrator | 2026-02-16 03:31:35.230502 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-16 03:31:35.230506 | orchestrator | Monday 16 February 2026 03:31:29 +0000 (0:00:00.993) 0:01:43.784 ******* 2026-02-16 03:31:35.230510 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:31:35.230514 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:31:35.230518 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:31:35.230522 | orchestrator | 2026-02-16 03:31:35.230525 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-16 03:31:35.230529 | orchestrator | Monday 16 February 2026 03:31:30 +0000 (0:00:00.603) 0:01:44.388 ******* 2026-02-16 03:31:35.230533 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:31:35.230537 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:31:35.230540 | orchestrator | ok: [testbed-node-2] 2026-02-16 
03:31:35.230544 | orchestrator | 2026-02-16 03:31:35.230548 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-16 03:31:35.230552 | orchestrator | Monday 16 February 2026 03:31:30 +0000 (0:00:00.741) 0:01:45.129 ******* 2026-02-16 03:31:35.230556 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:31:35.230559 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:31:35.230563 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:31:35.230567 | orchestrator | 2026-02-16 03:31:35.230571 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-16 03:31:35.230584 | orchestrator | Monday 16 February 2026 03:31:31 +0000 (0:00:00.612) 0:01:45.741 ******* 2026-02-16 03:31:35.230588 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:31:35.230591 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:31:35.230605 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:31:35.230610 | orchestrator | 2026-02-16 03:31:35.230613 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-16 03:31:35.230617 | orchestrator | Monday 16 February 2026 03:31:32 +0000 (0:00:01.148) 0:01:46.890 ******* 2026-02-16 03:31:35.230621 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:31:35.230625 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:31:35.230629 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:31:35.230632 | orchestrator | 2026-02-16 03:31:35.230636 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-02-16 03:31:35.230640 | orchestrator | Monday 16 February 2026 03:31:33 +0000 (0:00:00.727) 0:01:47.617 ******* 2026-02-16 03:31:35.230644 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:31:35.230648 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:31:35.230651 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:31:35.230655 | orchestrator | 2026-02-16 
03:31:35.230659 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-16 03:31:35.230663 | orchestrator | Monday 16 February 2026 03:31:33 +0000 (0:00:00.380) 0:01:47.998 ******* 2026-02-16 03:31:35.230668 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:35.230674 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:35.230682 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:35.230686 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:35.230690 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:35.230694 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:35.230698 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:35.230702 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:35.230713 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:42.109596 | orchestrator | 2026-02-16 03:31:42.109705 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-16 03:31:42.109723 | orchestrator | Monday 16 February 2026 03:31:35 +0000 (0:00:01.397) 0:01:49.396 ******* 2026-02-16 03:31:42.109738 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:42.109773 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:42.109786 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:42.109798 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:42.109811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:42.109823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:42.109834 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:42.109845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-16 03:31:42.109870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:42.109882 | orchestrator | 2026-02-16 03:31:42.109894 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-16 03:31:42.109905 | orchestrator | Monday 16 February 2026 03:31:38 +0000 (0:00:03.738) 0:01:53.134 ******* 2026-02-16 03:31:42.109934 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:42.109955 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:42.109966 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 
03:31:42.109978 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:42.109989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:42.110001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:42.110074 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:42.110087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:42.110098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 03:31:42.110109 | orchestrator | 2026-02-16 03:31:42.110121 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-16 03:31:42.110132 | orchestrator | Monday 16 February 2026 03:31:41 +0000 (0:00:02.924) 0:01:56.059 ******* 2026-02-16 03:31:42.110144 | orchestrator | 2026-02-16 03:31:42.110163 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-16 03:31:42.110177 | orchestrator | Monday 16 February 2026 03:31:41 +0000 (0:00:00.067) 0:01:56.126 ******* 2026-02-16 03:31:42.110197 | orchestrator | 2026-02-16 03:31:42.110210 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-16 03:31:42.110223 | orchestrator | Monday 16 February 2026 03:31:42 +0000 (0:00:00.063) 0:01:56.190 ******* 2026-02-16 03:31:42.110235 | orchestrator | 2026-02-16 03:31:42.110285 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-16 03:32:05.855552 | orchestrator | Monday 16 February 2026 03:31:42 +0000 (0:00:00.068) 0:01:56.258 ******* 2026-02-16 03:32:05.855668 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:32:05.855684 | orchestrator | changed: 
[testbed-node-2] 2026-02-16 03:32:05.855697 | orchestrator | 2026-02-16 03:32:05.855710 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-16 03:32:05.855722 | orchestrator | Monday 16 February 2026 03:31:48 +0000 (0:00:06.134) 0:02:02.392 ******* 2026-02-16 03:32:05.855733 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:32:05.855744 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:32:05.855755 | orchestrator | 2026-02-16 03:32:05.855766 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-16 03:32:05.855777 | orchestrator | Monday 16 February 2026 03:31:54 +0000 (0:00:06.122) 0:02:08.515 ******* 2026-02-16 03:32:05.855788 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:32:05.855800 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:32:05.855810 | orchestrator | 2026-02-16 03:32:05.855821 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-16 03:32:05.855832 | orchestrator | Monday 16 February 2026 03:32:00 +0000 (0:00:06.171) 0:02:14.686 ******* 2026-02-16 03:32:05.855843 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:32:05.855854 | orchestrator | 2026-02-16 03:32:05.855865 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-16 03:32:05.855876 | orchestrator | Monday 16 February 2026 03:32:00 +0000 (0:00:00.124) 0:02:14.811 ******* 2026-02-16 03:32:05.855887 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:32:05.855899 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:32:05.855910 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:32:05.855921 | orchestrator | 2026-02-16 03:32:05.855932 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-16 03:32:05.855943 | orchestrator | Monday 16 February 2026 03:32:01 +0000 (0:00:00.996) 0:02:15.808 ******* 
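The "Get OVN_Northbound cluster leader" step above can be reproduced by hand by asking the NB database for its Raft cluster status. This is a hedged sketch: the container name (`ovn_nb_db`) matches this log, but the control-socket path and the exact query are assumptions based on common kolla/OVN defaults, not taken from this job's tasks.

```shell
# Hypothetical helper: print the command that queries Raft cluster
# status of the OVN Northbound DB inside the kolla container.
# Socket path /var/run/ovn/ovnnb_db.ctl is an assumption; verify it
# on your deployment before relying on this.
ovn_nb_cluster_status_cmd() {
    echo "docker exec ovn_nb_db ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound"
}

ovn_nb_cluster_status_cmd
```

On the node currently elected leader, the queried status output includes a `Role: leader` line; followers report `Role: follower`.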
2026-02-16 03:32:05.855954 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:32:05.855965 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:32:05.855975 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:32:05.855986 | orchestrator | 2026-02-16 03:32:05.855997 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-16 03:32:05.856008 | orchestrator | Monday 16 February 2026 03:32:02 +0000 (0:00:00.607) 0:02:16.415 ******* 2026-02-16 03:32:05.856019 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:32:05.856030 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:32:05.856041 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:32:05.856052 | orchestrator | 2026-02-16 03:32:05.856064 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-16 03:32:05.856074 | orchestrator | Monday 16 February 2026 03:32:03 +0000 (0:00:00.799) 0:02:17.214 ******* 2026-02-16 03:32:05.856088 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:32:05.856101 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:32:05.856114 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:32:05.856127 | orchestrator | 2026-02-16 03:32:05.856140 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-16 03:32:05.856153 | orchestrator | Monday 16 February 2026 03:32:03 +0000 (0:00:00.631) 0:02:17.846 ******* 2026-02-16 03:32:05.856166 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:32:05.856179 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:32:05.856192 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:32:05.856205 | orchestrator | 2026-02-16 03:32:05.856218 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-16 03:32:05.856231 | orchestrator | Monday 16 February 2026 03:32:04 +0000 (0:00:00.946) 0:02:18.792 ******* 2026-02-16 03:32:05.856269 | orchestrator 
| ok: [testbed-node-0] 2026-02-16 03:32:05.856308 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:32:05.856320 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:32:05.856333 | orchestrator | 2026-02-16 03:32:05.856347 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 03:32:05.856361 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-16 03:32:05.856376 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-16 03:32:05.856388 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-16 03:32:05.856401 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 03:32:05.856426 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 03:32:05.856439 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 03:32:05.856451 | orchestrator | 2026-02-16 03:32:05.856462 | orchestrator | 2026-02-16 03:32:05.856474 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 03:32:05.856485 | orchestrator | Monday 16 February 2026 03:32:05 +0000 (0:00:00.864) 0:02:19.656 ******* 2026-02-16 03:32:05.856504 | orchestrator | =============================================================================== 2026-02-16 03:32:05.856531 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 29.60s 2026-02-16 03:32:05.856542 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.30s 2026-02-16 03:32:05.856553 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.97s 2026-02-16 03:32:05.856564 | orchestrator | ovn-db 
: Restart ovn-northd container ---------------------------------- 13.60s 2026-02-16 03:32:05.856575 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.47s 2026-02-16 03:32:05.856605 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.75s 2026-02-16 03:32:05.856616 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.74s 2026-02-16 03:32:05.856627 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.92s 2026-02-16 03:32:05.856638 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.32s 2026-02-16 03:32:05.856649 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.06s 2026-02-16 03:32:05.856660 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.54s 2026-02-16 03:32:05.856671 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.49s 2026-02-16 03:32:05.856682 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.47s 2026-02-16 03:32:05.856692 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.41s 2026-02-16 03:32:05.856703 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.40s 2026-02-16 03:32:05.856714 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.29s 2026-02-16 03:32:05.856725 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.15s 2026-02-16 03:32:05.856736 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.10s 2026-02-16 03:32:05.856747 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.09s 2026-02-16 03:32:05.856758 | orchestrator | ovn-controller : 
include_tasks ------------------------------------------ 1.05s 2026-02-16 03:32:06.126101 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-16 03:32:06.126202 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-02-16 03:32:08.205406 | orchestrator | 2026-02-16 03:32:08 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-16 03:32:18.326784 | orchestrator | 2026-02-16 03:32:18 | INFO  | Task 9df6d990-8558-4e54-bccd-b178722c978f (wipe-partitions) was prepared for execution. 2026-02-16 03:32:18.326896 | orchestrator | 2026-02-16 03:32:18 | INFO  | It takes a moment until task 9df6d990-8558-4e54-bccd-b178722c978f (wipe-partitions) has been started and output is visible here. 2026-02-16 03:32:30.831992 | orchestrator | 2026-02-16 03:32:30.832103 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-02-16 03:32:30.832121 | orchestrator | 2026-02-16 03:32:30.832133 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-02-16 03:32:30.832145 | orchestrator | Monday 16 February 2026 03:32:22 +0000 (0:00:00.127) 0:00:00.127 ******* 2026-02-16 03:32:30.832156 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:32:30.832168 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:32:30.832180 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:32:30.832190 | orchestrator | 2026-02-16 03:32:30.832201 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-02-16 03:32:30.832213 | orchestrator | Monday 16 February 2026 03:32:23 +0000 (0:00:00.609) 0:00:00.737 ******* 2026-02-16 03:32:30.832223 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:32:30.832234 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:32:30.832245 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:32:30.832256 | orchestrator | 2026-02-16 03:32:30.832267 | 
orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-02-16 03:32:30.832278 | orchestrator | Monday 16 February 2026 03:32:23 +0000 (0:00:00.382) 0:00:01.119 ******* 2026-02-16 03:32:30.832289 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:32:30.832301 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:32:30.832346 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:32:30.832365 | orchestrator | 2026-02-16 03:32:30.832383 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-02-16 03:32:30.832403 | orchestrator | Monday 16 February 2026 03:32:24 +0000 (0:00:00.587) 0:00:01.706 ******* 2026-02-16 03:32:30.832420 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:32:30.832440 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:32:30.832454 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:32:30.832464 | orchestrator | 2026-02-16 03:32:30.832475 | orchestrator | TASK [Check device availability] *********************************************** 2026-02-16 03:32:30.832487 | orchestrator | Monday 16 February 2026 03:32:24 +0000 (0:00:00.249) 0:00:01.955 ******* 2026-02-16 03:32:30.832501 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-16 03:32:30.832514 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-16 03:32:30.832525 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-16 03:32:30.832537 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-16 03:32:30.832549 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-16 03:32:30.832561 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-16 03:32:30.832573 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-16 03:32:30.832587 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-16 03:32:30.832598 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
2026-02-16 03:32:30.832611 | orchestrator | 2026-02-16 03:32:30.832622 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-02-16 03:32:30.832653 | orchestrator | Monday 16 February 2026 03:32:25 +0000 (0:00:01.195) 0:00:03.151 ******* 2026-02-16 03:32:30.832665 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-02-16 03:32:30.832678 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-02-16 03:32:30.832689 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-02-16 03:32:30.832723 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-02-16 03:32:30.832736 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-02-16 03:32:30.832748 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-02-16 03:32:30.832760 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-02-16 03:32:30.832772 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-02-16 03:32:30.832784 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-02-16 03:32:30.832795 | orchestrator | 2026-02-16 03:32:30.832807 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-02-16 03:32:30.832819 | orchestrator | Monday 16 February 2026 03:32:27 +0000 (0:00:01.580) 0:00:04.732 ******* 2026-02-16 03:32:30.832831 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-16 03:32:30.832844 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-16 03:32:30.832856 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-16 03:32:30.832868 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-16 03:32:30.832880 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-16 03:32:30.832891 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-16 03:32:30.832902 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-16 03:32:30.832913 | orchestrator | 
changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-16 03:32:30.832923 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-16 03:32:30.832934 | orchestrator | 2026-02-16 03:32:30.832944 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-02-16 03:32:30.832955 | orchestrator | Monday 16 February 2026 03:32:29 +0000 (0:00:02.152) 0:00:06.884 ******* 2026-02-16 03:32:30.832966 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:32:30.832976 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:32:30.832987 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:32:30.832997 | orchestrator | 2026-02-16 03:32:30.833008 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-02-16 03:32:30.833018 | orchestrator | Monday 16 February 2026 03:32:29 +0000 (0:00:00.592) 0:00:07.477 ******* 2026-02-16 03:32:30.833029 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:32:30.833040 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:32:30.833050 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:32:30.833061 | orchestrator | 2026-02-16 03:32:30.833071 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 03:32:30.833083 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:32:30.833095 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:32:30.833123 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:32:30.833135 | orchestrator | 2026-02-16 03:32:30.833146 | orchestrator | 2026-02-16 03:32:30.833158 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 03:32:30.833168 | orchestrator | Monday 16 February 2026 03:32:30 +0000 
(0:00:00.630) 0:00:08.108 ******* 2026-02-16 03:32:30.833179 | orchestrator | =============================================================================== 2026-02-16 03:32:30.833190 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.15s 2026-02-16 03:32:30.833201 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.58s 2026-02-16 03:32:30.833212 | orchestrator | Check device availability ----------------------------------------------- 1.20s 2026-02-16 03:32:30.833223 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s 2026-02-16 03:32:30.833233 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.61s 2026-02-16 03:32:30.833244 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s 2026-02-16 03:32:30.833262 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.59s 2026-02-16 03:32:30.833273 | orchestrator | Remove all rook related logical devices --------------------------------- 0.38s 2026-02-16 03:32:30.833284 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2026-02-16 03:32:43.314654 | orchestrator | 2026-02-16 03:32:43 | INFO  | Task a2f9ef27-73ab-45d7-a8a5-1855adc50bf6 (facts) was prepared for execution. 2026-02-16 03:32:43.314764 | orchestrator | 2026-02-16 03:32:43 | INFO  | It takes a moment until task a2f9ef27-73ab-45d7-a8a5-1855adc50bf6 (facts) has been started and output is visible here. 
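The wipe-partitions play above boils down to a short per-device sequence: drop on-disk signatures with wipefs, zero the first 32M, then tell udev to re-read the devices. A minimal sketch, with `DRY_RUN=1` as the default so it only prints the commands (with `DRY_RUN=0` these commands destroy data on the listed disks):

```shell
# Print commands in dry-run mode (default), execute them otherwise.
run() {
    if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi
}

wipe_disks() {
    for dev in "$@"; do
        run wipefs --all "$dev"                       # drop filesystem/LVM/RAID signatures
        run dd if=/dev/zero of="$dev" bs=1M count=32  # zero first 32M (labels, GPT copies)
    done
    run udevadm control --reload-rules                # "Reload udev rules"
    run udevadm trigger                               # "Request device events from the kernel"
}

# The devices the play iterates over in this job:
wipe_disks /dev/sdb /dev/sdc /dev/sdd
```

The trailing `udevadm` pair matters: without re-triggering device events, stale by-id/by-uuid symlinks from the wiped signatures can linger and confuse the subsequent ceph-configure-lvm-volumes run.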
2026-02-16 03:32:56.185779 | orchestrator | 2026-02-16 03:32:56.185913 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-16 03:32:56.185938 | orchestrator | 2026-02-16 03:32:56.185956 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-16 03:32:56.185973 | orchestrator | Monday 16 February 2026 03:32:47 +0000 (0:00:00.262) 0:00:00.262 ******* 2026-02-16 03:32:56.185990 | orchestrator | ok: [testbed-manager] 2026-02-16 03:32:56.186007 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:32:56.186095 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:32:56.186113 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:32:56.186130 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:32:56.186147 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:32:56.186163 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:32:56.186179 | orchestrator | 2026-02-16 03:32:56.186195 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-16 03:32:56.186232 | orchestrator | Monday 16 February 2026 03:32:48 +0000 (0:00:01.091) 0:00:01.354 ******* 2026-02-16 03:32:56.186252 | orchestrator | skipping: [testbed-manager] 2026-02-16 03:32:56.186271 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:32:56.186290 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:32:56.186309 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:32:56.186329 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:32:56.186376 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:32:56.186392 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:32:56.186407 | orchestrator | 2026-02-16 03:32:56.186424 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-16 03:32:56.186444 | orchestrator | 2026-02-16 03:32:56.186463 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-16 03:32:56.186482 | orchestrator | Monday 16 February 2026 03:32:49 +0000 (0:00:01.239) 0:00:02.593 ******* 2026-02-16 03:32:56.186503 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:32:56.186523 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:32:56.186542 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:32:56.186558 | orchestrator | ok: [testbed-manager] 2026-02-16 03:32:56.186575 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:32:56.186591 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:32:56.186607 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:32:56.186624 | orchestrator | 2026-02-16 03:32:56.186640 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-16 03:32:56.186656 | orchestrator | 2026-02-16 03:32:56.186671 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-16 03:32:56.186687 | orchestrator | Monday 16 February 2026 03:32:55 +0000 (0:00:05.294) 0:00:07.887 ******* 2026-02-16 03:32:56.186703 | orchestrator | skipping: [testbed-manager] 2026-02-16 03:32:56.186719 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:32:56.186735 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:32:56.186750 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:32:56.186766 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:32:56.186781 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:32:56.186796 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:32:56.186811 | orchestrator | 2026-02-16 03:32:56.186827 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 03:32:56.186873 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:32:56.186890 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-16 03:32:56.186905 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:32:56.186920 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:32:56.186936 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:32:56.186951 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:32:56.186965 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:32:56.186980 | orchestrator | 2026-02-16 03:32:56.186995 | orchestrator | 2026-02-16 03:32:56.187010 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 03:32:56.187025 | orchestrator | Monday 16 February 2026 03:32:55 +0000 (0:00:00.547) 0:00:08.434 ******* 2026-02-16 03:32:56.187040 | orchestrator | =============================================================================== 2026-02-16 03:32:56.187054 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.29s 2026-02-16 03:32:56.187070 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s 2026-02-16 03:32:56.187084 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s 2026-02-16 03:32:56.187099 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-02-16 03:32:58.534938 | orchestrator | 2026-02-16 03:32:58 | INFO  | Task 31876b63-f23b-4812-9f07-f0c5d430053f (ceph-configure-lvm-volumes) was prepared for execution. 
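The "Create custom facts directory" / "Copy fact files" tasks above use Ansible's local-facts mechanism: any `*.fact` file containing JSON under the facts directory is exposed to playbooks as `ansible_local.<name>`. A sketch of that mechanism, using a temp directory so it is safe to run anywhere (on a managed node the real path is Ansible's default, `/etc/ansible/facts.d`; the file name `osism.fact` and its content are illustrative assumptions):

```shell
# Stand-in for /etc/ansible/facts.d so this sketch needs no root access.
FACTS_D="${FACTS_D:-$(mktemp -d)}"
mkdir -p "$FACTS_D"

# A static JSON .fact file; gathered facts surface it as
# ansible_local.osism on the next fact-gathering run.
printf '{"environment": "testbed"}\n' > "$FACTS_D/osism.fact"
cat "$FACTS_D/osism.fact"
```

Fact files can also be executables that print JSON, which is why the role both creates the directory and (conditionally, skipped in this run) copies fact files into it.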
2026-02-16 03:32:58.535049 | orchestrator | 2026-02-16 03:32:58 | INFO  | It takes a moment until task 31876b63-f23b-4812-9f07-f0c5d430053f (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-02-16 03:33:10.096025 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-16 03:33:10.096116 | orchestrator | 2.16.14 2026-02-16 03:33:10.096128 | orchestrator | 2026-02-16 03:33:10.096138 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-16 03:33:10.096147 | orchestrator | 2026-02-16 03:33:10.096154 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-16 03:33:10.096162 | orchestrator | Monday 16 February 2026 03:33:02 +0000 (0:00:00.317) 0:00:00.317 ******* 2026-02-16 03:33:10.096177 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-16 03:33:10.096190 | orchestrator | 2026-02-16 03:33:10.096202 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-16 03:33:10.096215 | orchestrator | Monday 16 February 2026 03:33:03 +0000 (0:00:00.246) 0:00:00.563 ******* 2026-02-16 03:33:10.096227 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:33:10.096239 | orchestrator | 2026-02-16 03:33:10.096251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:10.096279 | orchestrator | Monday 16 February 2026 03:33:03 +0000 (0:00:00.232) 0:00:00.797 ******* 2026-02-16 03:33:10.096290 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-16 03:33:10.096300 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-16 03:33:10.096311 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-16 03:33:10.096343 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-16 03:33:10.096403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-16 03:33:10.096416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-16 03:33:10.096426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-16 03:33:10.096436 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-16 03:33:10.096447 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-16 03:33:10.096457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-16 03:33:10.096467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-16 03:33:10.096477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-16 03:33:10.096487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-16 03:33:10.096497 | orchestrator | 2026-02-16 03:33:10.096507 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:10.096517 | orchestrator | Monday 16 February 2026 03:33:03 +0000 (0:00:00.530) 0:00:01.327 ******* 2026-02-16 03:33:10.096528 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:10.096538 | orchestrator | 2026-02-16 03:33:10.096558 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:10.096570 | orchestrator | Monday 16 February 2026 03:33:04 +0000 (0:00:00.189) 0:00:01.516 ******* 2026-02-16 03:33:10.096581 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:10.096594 | orchestrator | 2026-02-16 03:33:10.096605 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:10.096616 | orchestrator | Monday 16 February 2026 03:33:04 +0000 (0:00:00.181) 0:00:01.698 ******* 2026-02-16 03:33:10.096628 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:10.096639 | orchestrator | 2026-02-16 03:33:10.096651 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:10.096663 | orchestrator | Monday 16 February 2026 03:33:04 +0000 (0:00:00.194) 0:00:01.893 ******* 2026-02-16 03:33:10.096675 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:10.096687 | orchestrator | 2026-02-16 03:33:10.096698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:10.096710 | orchestrator | Monday 16 February 2026 03:33:04 +0000 (0:00:00.187) 0:00:02.080 ******* 2026-02-16 03:33:10.096720 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:10.096732 | orchestrator | 2026-02-16 03:33:10.096744 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:10.096756 | orchestrator | Monday 16 February 2026 03:33:04 +0000 (0:00:00.196) 0:00:02.276 ******* 2026-02-16 03:33:10.096768 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:10.096781 | orchestrator | 2026-02-16 03:33:10.096794 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:10.096806 | orchestrator | Monday 16 February 2026 03:33:05 +0000 (0:00:00.194) 0:00:02.470 ******* 2026-02-16 03:33:10.096819 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:10.096829 | orchestrator | 2026-02-16 03:33:10.096838 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:10.096846 | orchestrator | Monday 16 February 2026 03:33:05 +0000 (0:00:00.205) 0:00:02.676 ******* 
2026-02-16 03:33:10.096855 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:10.096863 | orchestrator | 2026-02-16 03:33:10.096872 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:10.096880 | orchestrator | Monday 16 February 2026 03:33:05 +0000 (0:00:00.198) 0:00:02.874 ******* 2026-02-16 03:33:10.096889 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5) 2026-02-16 03:33:10.096908 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5) 2026-02-16 03:33:10.096917 | orchestrator | 2026-02-16 03:33:10.096926 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:10.096952 | orchestrator | Monday 16 February 2026 03:33:05 +0000 (0:00:00.419) 0:00:03.293 ******* 2026-02-16 03:33:10.096961 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51) 2026-02-16 03:33:10.096970 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51) 2026-02-16 03:33:10.096978 | orchestrator | 2026-02-16 03:33:10.096986 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:10.096994 | orchestrator | Monday 16 February 2026 03:33:06 +0000 (0:00:00.598) 0:00:03.892 ******* 2026-02-16 03:33:10.097001 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e) 2026-02-16 03:33:10.097008 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e) 2026-02-16 03:33:10.097016 | orchestrator | 2026-02-16 03:33:10.097023 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:10.097037 | orchestrator | Monday 16 February 2026 03:33:07 
+0000 (0:00:00.609) 0:00:04.501 ******* 2026-02-16 03:33:10.097044 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2) 2026-02-16 03:33:10.097051 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2) 2026-02-16 03:33:10.097059 | orchestrator | 2026-02-16 03:33:10.097066 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:10.097073 | orchestrator | Monday 16 February 2026 03:33:07 +0000 (0:00:00.819) 0:00:05.320 ******* 2026-02-16 03:33:10.097080 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-16 03:33:10.097088 | orchestrator | 2026-02-16 03:33:10.097095 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:10.097102 | orchestrator | Monday 16 February 2026 03:33:08 +0000 (0:00:00.362) 0:00:05.683 ******* 2026-02-16 03:33:10.097109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-16 03:33:10.097116 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-16 03:33:10.097124 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-16 03:33:10.097131 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-16 03:33:10.097138 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-16 03:33:10.097145 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-16 03:33:10.097152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-16 03:33:10.097159 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2026-02-16 03:33:10.097166 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-16 03:33:10.097173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-16 03:33:10.097180 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-16 03:33:10.097187 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-16 03:33:10.097194 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-16 03:33:10.097202 | orchestrator | 2026-02-16 03:33:10.097209 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:10.097223 | orchestrator | Monday 16 February 2026 03:33:08 +0000 (0:00:00.396) 0:00:06.079 ******* 2026-02-16 03:33:10.097230 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:10.097237 | orchestrator | 2026-02-16 03:33:10.097244 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:10.097251 | orchestrator | Monday 16 February 2026 03:33:08 +0000 (0:00:00.204) 0:00:06.284 ******* 2026-02-16 03:33:10.097258 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:10.097266 | orchestrator | 2026-02-16 03:33:10.097273 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:10.097280 | orchestrator | Monday 16 February 2026 03:33:09 +0000 (0:00:00.198) 0:00:06.482 ******* 2026-02-16 03:33:10.097287 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:10.097294 | orchestrator | 2026-02-16 03:33:10.097301 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:10.097309 | orchestrator | Monday 16 February 2026 03:33:09 
+0000 (0:00:00.207) 0:00:06.689 ******* 2026-02-16 03:33:10.097316 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:10.097323 | orchestrator | 2026-02-16 03:33:10.097330 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:10.097337 | orchestrator | Monday 16 February 2026 03:33:09 +0000 (0:00:00.222) 0:00:06.912 ******* 2026-02-16 03:33:10.097344 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:10.097352 | orchestrator | 2026-02-16 03:33:10.097385 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:10.097394 | orchestrator | Monday 16 February 2026 03:33:09 +0000 (0:00:00.205) 0:00:07.118 ******* 2026-02-16 03:33:10.097401 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:10.097408 | orchestrator | 2026-02-16 03:33:10.097415 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:10.097423 | orchestrator | Monday 16 February 2026 03:33:09 +0000 (0:00:00.200) 0:00:07.318 ******* 2026-02-16 03:33:10.097430 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:10.097437 | orchestrator | 2026-02-16 03:33:10.097453 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:17.609292 | orchestrator | Monday 16 February 2026 03:33:10 +0000 (0:00:00.182) 0:00:07.501 ******* 2026-02-16 03:33:17.609428 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:17.609447 | orchestrator | 2026-02-16 03:33:17.609461 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:17.609473 | orchestrator | Monday 16 February 2026 03:33:10 +0000 (0:00:00.187) 0:00:07.689 ******* 2026-02-16 03:33:17.609484 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-16 03:33:17.609496 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-16 
03:33:17.609506 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-16 03:33:17.609517 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-16 03:33:17.609528 | orchestrator | 2026-02-16 03:33:17.609539 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:17.609550 | orchestrator | Monday 16 February 2026 03:33:11 +0000 (0:00:01.038) 0:00:08.728 ******* 2026-02-16 03:33:17.609561 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:17.609572 | orchestrator | 2026-02-16 03:33:17.609598 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:17.609610 | orchestrator | Monday 16 February 2026 03:33:11 +0000 (0:00:00.192) 0:00:08.920 ******* 2026-02-16 03:33:17.609621 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:17.609632 | orchestrator | 2026-02-16 03:33:17.609643 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:17.609654 | orchestrator | Monday 16 February 2026 03:33:11 +0000 (0:00:00.201) 0:00:09.121 ******* 2026-02-16 03:33:17.609680 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:17.609702 | orchestrator | 2026-02-16 03:33:17.609713 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:17.609725 | orchestrator | Monday 16 February 2026 03:33:11 +0000 (0:00:00.201) 0:00:09.322 ******* 2026-02-16 03:33:17.609761 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:17.609773 | orchestrator | 2026-02-16 03:33:17.609784 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-16 03:33:17.609795 | orchestrator | Monday 16 February 2026 03:33:12 +0000 (0:00:00.209) 0:00:09.532 ******* 2026-02-16 03:33:17.609806 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-02-16 03:33:17.609817 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-02-16 03:33:17.609829 | orchestrator | 2026-02-16 03:33:17.609841 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-16 03:33:17.609853 | orchestrator | Monday 16 February 2026 03:33:12 +0000 (0:00:00.182) 0:00:09.714 ******* 2026-02-16 03:33:17.609865 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:17.609877 | orchestrator | 2026-02-16 03:33:17.609890 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-16 03:33:17.609903 | orchestrator | Monday 16 February 2026 03:33:12 +0000 (0:00:00.159) 0:00:09.874 ******* 2026-02-16 03:33:17.609915 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:17.609927 | orchestrator | 2026-02-16 03:33:17.609939 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-16 03:33:17.609952 | orchestrator | Monday 16 February 2026 03:33:12 +0000 (0:00:00.145) 0:00:10.019 ******* 2026-02-16 03:33:17.609965 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:17.609978 | orchestrator | 2026-02-16 03:33:17.609990 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-16 03:33:17.610002 | orchestrator | Monday 16 February 2026 03:33:12 +0000 (0:00:00.146) 0:00:10.166 ******* 2026-02-16 03:33:17.610158 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:33:17.610172 | orchestrator | 2026-02-16 03:33:17.610183 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-16 03:33:17.610194 | orchestrator | Monday 16 February 2026 03:33:12 +0000 (0:00:00.141) 0:00:10.307 ******* 2026-02-16 03:33:17.610206 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f9a42f5-b575-5e11-9555-a5550e2fae1e'}}) 2026-02-16 03:33:17.610217 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '50d7a967-e09e-512a-aa83-aa9bbdf9ab74'}}) 2026-02-16 03:33:17.610228 | orchestrator | 2026-02-16 03:33:17.610239 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-16 03:33:17.610250 | orchestrator | Monday 16 February 2026 03:33:13 +0000 (0:00:00.165) 0:00:10.473 ******* 2026-02-16 03:33:17.610262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f9a42f5-b575-5e11-9555-a5550e2fae1e'}})  2026-02-16 03:33:17.610274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '50d7a967-e09e-512a-aa83-aa9bbdf9ab74'}})  2026-02-16 03:33:17.610285 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:17.610296 | orchestrator | 2026-02-16 03:33:17.610307 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-16 03:33:17.610318 | orchestrator | Monday 16 February 2026 03:33:13 +0000 (0:00:00.341) 0:00:10.815 ******* 2026-02-16 03:33:17.610329 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f9a42f5-b575-5e11-9555-a5550e2fae1e'}})  2026-02-16 03:33:17.610340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '50d7a967-e09e-512a-aa83-aa9bbdf9ab74'}})  2026-02-16 03:33:17.610351 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:17.610362 | orchestrator | 2026-02-16 03:33:17.610436 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-16 03:33:17.610448 | orchestrator | Monday 16 February 2026 03:33:13 +0000 (0:00:00.152) 0:00:10.968 ******* 2026-02-16 03:33:17.610459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f9a42f5-b575-5e11-9555-a5550e2fae1e'}})  2026-02-16 03:33:17.610502 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '50d7a967-e09e-512a-aa83-aa9bbdf9ab74'}})  2026-02-16 03:33:17.610514 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:17.610525 | orchestrator | 2026-02-16 03:33:17.610536 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-16 03:33:17.610547 | orchestrator | Monday 16 February 2026 03:33:13 +0000 (0:00:00.151) 0:00:11.119 ******* 2026-02-16 03:33:17.610558 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:33:17.610569 | orchestrator | 2026-02-16 03:33:17.610580 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-16 03:33:17.610591 | orchestrator | Monday 16 February 2026 03:33:13 +0000 (0:00:00.146) 0:00:11.266 ******* 2026-02-16 03:33:17.610602 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:33:17.610613 | orchestrator | 2026-02-16 03:33:17.610623 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-16 03:33:17.610634 | orchestrator | Monday 16 February 2026 03:33:14 +0000 (0:00:00.159) 0:00:11.425 ******* 2026-02-16 03:33:17.610645 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:17.610656 | orchestrator | 2026-02-16 03:33:17.610674 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-16 03:33:17.610685 | orchestrator | Monday 16 February 2026 03:33:14 +0000 (0:00:00.141) 0:00:11.566 ******* 2026-02-16 03:33:17.610696 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:17.610707 | orchestrator | 2026-02-16 03:33:17.610717 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-16 03:33:17.610728 | orchestrator | Monday 16 February 2026 03:33:14 +0000 (0:00:00.136) 0:00:11.703 ******* 2026-02-16 03:33:17.610739 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:17.610750 | orchestrator | 2026-02-16 
03:33:17.610760 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-16 03:33:17.610771 | orchestrator | Monday 16 February 2026 03:33:14 +0000 (0:00:00.137) 0:00:11.841 ******* 2026-02-16 03:33:17.610782 | orchestrator | ok: [testbed-node-3] => { 2026-02-16 03:33:17.610793 | orchestrator |  "ceph_osd_devices": { 2026-02-16 03:33:17.610803 | orchestrator |  "sdb": { 2026-02-16 03:33:17.610814 | orchestrator |  "osd_lvm_uuid": "2f9a42f5-b575-5e11-9555-a5550e2fae1e" 2026-02-16 03:33:17.610826 | orchestrator |  }, 2026-02-16 03:33:17.610837 | orchestrator |  "sdc": { 2026-02-16 03:33:17.610847 | orchestrator |  "osd_lvm_uuid": "50d7a967-e09e-512a-aa83-aa9bbdf9ab74" 2026-02-16 03:33:17.610858 | orchestrator |  } 2026-02-16 03:33:17.610869 | orchestrator |  } 2026-02-16 03:33:17.610880 | orchestrator | } 2026-02-16 03:33:17.610891 | orchestrator | 2026-02-16 03:33:17.610902 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-16 03:33:17.610913 | orchestrator | Monday 16 February 2026 03:33:14 +0000 (0:00:00.149) 0:00:11.991 ******* 2026-02-16 03:33:17.610923 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:17.610934 | orchestrator | 2026-02-16 03:33:17.610945 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-16 03:33:17.610955 | orchestrator | Monday 16 February 2026 03:33:14 +0000 (0:00:00.140) 0:00:12.131 ******* 2026-02-16 03:33:17.610966 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:17.610976 | orchestrator | 2026-02-16 03:33:17.610985 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-16 03:33:17.610995 | orchestrator | Monday 16 February 2026 03:33:14 +0000 (0:00:00.130) 0:00:12.262 ******* 2026-02-16 03:33:17.611005 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:33:17.611014 | orchestrator | 2026-02-16 
03:33:17.611024 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-16 03:33:17.611039 | orchestrator | Monday 16 February 2026 03:33:14 +0000 (0:00:00.128) 0:00:12.390 ******* 2026-02-16 03:33:17.611057 | orchestrator | changed: [testbed-node-3] => { 2026-02-16 03:33:17.611082 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-16 03:33:17.611101 | orchestrator |  "ceph_osd_devices": { 2026-02-16 03:33:17.611127 | orchestrator |  "sdb": { 2026-02-16 03:33:17.611144 | orchestrator |  "osd_lvm_uuid": "2f9a42f5-b575-5e11-9555-a5550e2fae1e" 2026-02-16 03:33:17.611160 | orchestrator |  }, 2026-02-16 03:33:17.611177 | orchestrator |  "sdc": { 2026-02-16 03:33:17.611193 | orchestrator |  "osd_lvm_uuid": "50d7a967-e09e-512a-aa83-aa9bbdf9ab74" 2026-02-16 03:33:17.611210 | orchestrator |  } 2026-02-16 03:33:17.611228 | orchestrator |  }, 2026-02-16 03:33:17.611247 | orchestrator |  "lvm_volumes": [ 2026-02-16 03:33:17.611265 | orchestrator |  { 2026-02-16 03:33:17.611281 | orchestrator |  "data": "osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e", 2026-02-16 03:33:17.611297 | orchestrator |  "data_vg": "ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e" 2026-02-16 03:33:17.611307 | orchestrator |  }, 2026-02-16 03:33:17.611316 | orchestrator |  { 2026-02-16 03:33:17.611326 | orchestrator |  "data": "osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74", 2026-02-16 03:33:17.611335 | orchestrator |  "data_vg": "ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74" 2026-02-16 03:33:17.611345 | orchestrator |  } 2026-02-16 03:33:17.611354 | orchestrator |  ] 2026-02-16 03:33:17.611412 | orchestrator |  } 2026-02-16 03:33:17.611425 | orchestrator | } 2026-02-16 03:33:17.611435 | orchestrator | 2026-02-16 03:33:17.611444 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-16 03:33:17.611454 | orchestrator | Monday 16 February 2026 03:33:15 +0000 (0:00:00.415) 0:00:12.806 ******* 2026-02-16 
03:33:17.611464 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-16 03:33:17.611474 | orchestrator | 2026-02-16 03:33:17.611483 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-16 03:33:17.611493 | orchestrator | 2026-02-16 03:33:17.611502 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-16 03:33:17.611512 | orchestrator | Monday 16 February 2026 03:33:17 +0000 (0:00:01.709) 0:00:14.515 ******* 2026-02-16 03:33:17.611522 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-16 03:33:17.611531 | orchestrator | 2026-02-16 03:33:17.611541 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-16 03:33:17.611551 | orchestrator | Monday 16 February 2026 03:33:17 +0000 (0:00:00.272) 0:00:14.788 ******* 2026-02-16 03:33:17.611560 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:33:17.611570 | orchestrator | 2026-02-16 03:33:17.611590 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:26.617864 | orchestrator | Monday 16 February 2026 03:33:17 +0000 (0:00:00.232) 0:00:15.020 ******* 2026-02-16 03:33:26.617981 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-16 03:33:26.617996 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-16 03:33:26.618005 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-16 03:33:26.618047 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-16 03:33:26.618109 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-16 03:33:26.618121 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-16 03:33:26.618145 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-16 03:33:26.618155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-16 03:33:26.618164 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-16 03:33:26.618172 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-16 03:33:26.618180 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-16 03:33:26.618234 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-16 03:33:26.618244 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-16 03:33:26.618252 | orchestrator | 2026-02-16 03:33:26.618262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:26.618270 | orchestrator | Monday 16 February 2026 03:33:17 +0000 (0:00:00.387) 0:00:15.407 ******* 2026-02-16 03:33:26.618279 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:26.618287 | orchestrator | 2026-02-16 03:33:26.618295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:26.618303 | orchestrator | Monday 16 February 2026 03:33:18 +0000 (0:00:00.203) 0:00:15.611 ******* 2026-02-16 03:33:26.618311 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:26.618319 | orchestrator | 2026-02-16 03:33:26.618327 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:26.618336 | orchestrator | Monday 16 February 2026 03:33:18 +0000 (0:00:00.210) 0:00:15.821 ******* 2026-02-16 03:33:26.618343 | orchestrator | skipping: 
[testbed-node-4] 2026-02-16 03:33:26.618351 | orchestrator | 2026-02-16 03:33:26.618359 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:26.618367 | orchestrator | Monday 16 February 2026 03:33:18 +0000 (0:00:00.208) 0:00:16.029 ******* 2026-02-16 03:33:26.618399 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:26.618414 | orchestrator | 2026-02-16 03:33:26.618427 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:26.618442 | orchestrator | Monday 16 February 2026 03:33:19 +0000 (0:00:00.594) 0:00:16.623 ******* 2026-02-16 03:33:26.618456 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:26.618470 | orchestrator | 2026-02-16 03:33:26.618496 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:26.618506 | orchestrator | Monday 16 February 2026 03:33:19 +0000 (0:00:00.219) 0:00:16.843 ******* 2026-02-16 03:33:26.618514 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:26.618524 | orchestrator | 2026-02-16 03:33:26.618533 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:26.618543 | orchestrator | Monday 16 February 2026 03:33:19 +0000 (0:00:00.204) 0:00:17.048 ******* 2026-02-16 03:33:26.618553 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:26.618562 | orchestrator | 2026-02-16 03:33:26.618571 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:26.618580 | orchestrator | Monday 16 February 2026 03:33:19 +0000 (0:00:00.196) 0:00:17.245 ******* 2026-02-16 03:33:26.618589 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:26.618598 | orchestrator | 2026-02-16 03:33:26.618607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:26.618616 | 
orchestrator | Monday 16 February 2026 03:33:20 +0000 (0:00:00.206) 0:00:17.452 ******* 2026-02-16 03:33:26.618626 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29) 2026-02-16 03:33:26.618635 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29) 2026-02-16 03:33:26.618645 | orchestrator | 2026-02-16 03:33:26.618653 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:26.618664 | orchestrator | Monday 16 February 2026 03:33:20 +0000 (0:00:00.424) 0:00:17.876 ******* 2026-02-16 03:33:26.618673 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829) 2026-02-16 03:33:26.618682 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829) 2026-02-16 03:33:26.618691 | orchestrator | 2026-02-16 03:33:26.618700 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:26.618711 | orchestrator | Monday 16 February 2026 03:33:20 +0000 (0:00:00.433) 0:00:18.310 ******* 2026-02-16 03:33:26.618735 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e) 2026-02-16 03:33:26.618748 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e) 2026-02-16 03:33:26.618760 | orchestrator | 2026-02-16 03:33:26.618774 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:26.618806 | orchestrator | Monday 16 February 2026 03:33:21 +0000 (0:00:00.452) 0:00:18.762 ******* 2026-02-16 03:33:26.618819 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705) 2026-02-16 03:33:26.618832 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705) 2026-02-16 03:33:26.618844 | orchestrator | 2026-02-16 03:33:26.618856 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:26.618869 | orchestrator | Monday 16 February 2026 03:33:22 +0000 (0:00:00.679) 0:00:19.442 ******* 2026-02-16 03:33:26.618882 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-16 03:33:26.618897 | orchestrator | 2026-02-16 03:33:26.618909 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:26.618931 | orchestrator | Monday 16 February 2026 03:33:22 +0000 (0:00:00.561) 0:00:20.004 ******* 2026-02-16 03:33:26.618944 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-02-16 03:33:26.618957 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-16 03:33:26.618969 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-16 03:33:26.618983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-16 03:33:26.618997 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-16 03:33:26.619009 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-16 03:33:26.619036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-16 03:33:26.619045 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-16 03:33:26.619053 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-16 03:33:26.619060 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-16 03:33:26.619068 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-16 03:33:26.619076 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-16 03:33:26.619084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-16 03:33:26.619092 | orchestrator | 2026-02-16 03:33:26.619100 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:26.619108 | orchestrator | Monday 16 February 2026 03:33:23 +0000 (0:00:00.867) 0:00:20.872 ******* 2026-02-16 03:33:26.619116 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:26.619124 | orchestrator | 2026-02-16 03:33:26.619132 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:26.619140 | orchestrator | Monday 16 February 2026 03:33:23 +0000 (0:00:00.202) 0:00:21.074 ******* 2026-02-16 03:33:26.619148 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:26.619155 | orchestrator | 2026-02-16 03:33:26.619163 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:26.619171 | orchestrator | Monday 16 February 2026 03:33:23 +0000 (0:00:00.207) 0:00:21.281 ******* 2026-02-16 03:33:26.619179 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:26.619187 | orchestrator | 2026-02-16 03:33:26.619195 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:26.619211 | orchestrator | Monday 16 February 2026 03:33:24 +0000 (0:00:00.198) 0:00:21.480 ******* 2026-02-16 03:33:26.619219 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:26.619227 | orchestrator | 2026-02-16 03:33:26.619235 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-16 03:33:26.619243 | orchestrator | Monday 16 February 2026 03:33:24 +0000 (0:00:00.196) 0:00:21.676 ******* 2026-02-16 03:33:26.619250 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:26.619258 | orchestrator | 2026-02-16 03:33:26.619266 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:26.619274 | orchestrator | Monday 16 February 2026 03:33:24 +0000 (0:00:00.213) 0:00:21.889 ******* 2026-02-16 03:33:26.619282 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:26.619290 | orchestrator | 2026-02-16 03:33:26.619297 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:26.619305 | orchestrator | Monday 16 February 2026 03:33:24 +0000 (0:00:00.210) 0:00:22.100 ******* 2026-02-16 03:33:26.619313 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:26.619321 | orchestrator | 2026-02-16 03:33:26.619328 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:26.619336 | orchestrator | Monday 16 February 2026 03:33:24 +0000 (0:00:00.209) 0:00:22.309 ******* 2026-02-16 03:33:26.619344 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:26.619352 | orchestrator | 2026-02-16 03:33:26.619360 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:26.619368 | orchestrator | Monday 16 February 2026 03:33:25 +0000 (0:00:00.201) 0:00:22.511 ******* 2026-02-16 03:33:26.619400 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-16 03:33:26.619410 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-16 03:33:26.619419 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-16 03:33:26.619427 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-16 03:33:26.619434 | orchestrator | 2026-02-16 
03:33:26.619442 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:26.619450 | orchestrator | Monday 16 February 2026 03:33:25 +0000 (0:00:00.866) 0:00:23.377 ******* 2026-02-16 03:33:26.619458 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:32.441527 | orchestrator | 2026-02-16 03:33:32.441635 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:32.441654 | orchestrator | Monday 16 February 2026 03:33:26 +0000 (0:00:00.652) 0:00:24.030 ******* 2026-02-16 03:33:32.441666 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:32.441678 | orchestrator | 2026-02-16 03:33:32.441689 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:32.441700 | orchestrator | Monday 16 February 2026 03:33:26 +0000 (0:00:00.207) 0:00:24.237 ******* 2026-02-16 03:33:32.441711 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:32.441722 | orchestrator | 2026-02-16 03:33:32.441734 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:32.441745 | orchestrator | Monday 16 February 2026 03:33:27 +0000 (0:00:00.200) 0:00:24.438 ******* 2026-02-16 03:33:32.441756 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:32.441767 | orchestrator | 2026-02-16 03:33:32.441793 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-16 03:33:32.441805 | orchestrator | Monday 16 February 2026 03:33:27 +0000 (0:00:00.218) 0:00:24.656 ******* 2026-02-16 03:33:32.441816 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-02-16 03:33:32.441827 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-02-16 03:33:32.441838 | orchestrator | 2026-02-16 03:33:32.441849 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-02-16 03:33:32.441859 | orchestrator | Monday 16 February 2026 03:33:27 +0000 (0:00:00.171) 0:00:24.828 ******* 2026-02-16 03:33:32.441870 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:32.441881 | orchestrator | 2026-02-16 03:33:32.441892 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-16 03:33:32.441926 | orchestrator | Monday 16 February 2026 03:33:27 +0000 (0:00:00.147) 0:00:24.975 ******* 2026-02-16 03:33:32.441938 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:32.441949 | orchestrator | 2026-02-16 03:33:32.441960 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-16 03:33:32.441971 | orchestrator | Monday 16 February 2026 03:33:27 +0000 (0:00:00.148) 0:00:25.124 ******* 2026-02-16 03:33:32.441982 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:32.441992 | orchestrator | 2026-02-16 03:33:32.442003 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-16 03:33:32.442073 | orchestrator | Monday 16 February 2026 03:33:27 +0000 (0:00:00.151) 0:00:25.276 ******* 2026-02-16 03:33:32.442086 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:33:32.442098 | orchestrator | 2026-02-16 03:33:32.442109 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-16 03:33:32.442119 | orchestrator | Monday 16 February 2026 03:33:27 +0000 (0:00:00.132) 0:00:25.409 ******* 2026-02-16 03:33:32.442131 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'}}) 2026-02-16 03:33:32.442142 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3ec6a818-dc71-5cb4-ac47-83f209d09bca'}}) 2026-02-16 03:33:32.442153 | orchestrator | 2026-02-16 03:33:32.442164 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-02-16 03:33:32.442175 | orchestrator | Monday 16 February 2026 03:33:28 +0000 (0:00:00.179) 0:00:25.589 ******* 2026-02-16 03:33:32.442186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'}})  2026-02-16 03:33:32.442199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3ec6a818-dc71-5cb4-ac47-83f209d09bca'}})  2026-02-16 03:33:32.442210 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:32.442221 | orchestrator | 2026-02-16 03:33:32.442232 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-16 03:33:32.442242 | orchestrator | Monday 16 February 2026 03:33:28 +0000 (0:00:00.146) 0:00:25.735 ******* 2026-02-16 03:33:32.442253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'}})  2026-02-16 03:33:32.442264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3ec6a818-dc71-5cb4-ac47-83f209d09bca'}})  2026-02-16 03:33:32.442275 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:32.442286 | orchestrator | 2026-02-16 03:33:32.442297 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-16 03:33:32.442307 | orchestrator | Monday 16 February 2026 03:33:28 +0000 (0:00:00.340) 0:00:26.076 ******* 2026-02-16 03:33:32.442318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'}})  2026-02-16 03:33:32.442329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3ec6a818-dc71-5cb4-ac47-83f209d09bca'}})  2026-02-16 03:33:32.442340 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:32.442365 | 
orchestrator | 2026-02-16 03:33:32.442400 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-16 03:33:32.442412 | orchestrator | Monday 16 February 2026 03:33:28 +0000 (0:00:00.153) 0:00:26.229 ******* 2026-02-16 03:33:32.442423 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:33:32.442434 | orchestrator | 2026-02-16 03:33:32.442444 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-16 03:33:32.442455 | orchestrator | Monday 16 February 2026 03:33:28 +0000 (0:00:00.143) 0:00:26.373 ******* 2026-02-16 03:33:32.442466 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:33:32.442477 | orchestrator | 2026-02-16 03:33:32.442488 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-16 03:33:32.442508 | orchestrator | Monday 16 February 2026 03:33:29 +0000 (0:00:00.134) 0:00:26.507 ******* 2026-02-16 03:33:32.442537 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:32.442549 | orchestrator | 2026-02-16 03:33:32.442560 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-16 03:33:32.442571 | orchestrator | Monday 16 February 2026 03:33:29 +0000 (0:00:00.140) 0:00:26.648 ******* 2026-02-16 03:33:32.442582 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:32.442593 | orchestrator | 2026-02-16 03:33:32.442604 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-16 03:33:32.442614 | orchestrator | Monday 16 February 2026 03:33:29 +0000 (0:00:00.123) 0:00:26.772 ******* 2026-02-16 03:33:32.442625 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:32.442636 | orchestrator | 2026-02-16 03:33:32.442647 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-16 03:33:32.442658 | orchestrator | Monday 16 February 2026 03:33:29 +0000 
(0:00:00.128) 0:00:26.901 ******* 2026-02-16 03:33:32.442669 | orchestrator | ok: [testbed-node-4] => { 2026-02-16 03:33:32.442680 | orchestrator |  "ceph_osd_devices": { 2026-02-16 03:33:32.442697 | orchestrator |  "sdb": { 2026-02-16 03:33:32.442709 | orchestrator |  "osd_lvm_uuid": "ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d" 2026-02-16 03:33:32.442720 | orchestrator |  }, 2026-02-16 03:33:32.442731 | orchestrator |  "sdc": { 2026-02-16 03:33:32.442742 | orchestrator |  "osd_lvm_uuid": "3ec6a818-dc71-5cb4-ac47-83f209d09bca" 2026-02-16 03:33:32.442753 | orchestrator |  } 2026-02-16 03:33:32.442764 | orchestrator |  } 2026-02-16 03:33:32.442775 | orchestrator | } 2026-02-16 03:33:32.442786 | orchestrator | 2026-02-16 03:33:32.442797 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-16 03:33:32.442808 | orchestrator | Monday 16 February 2026 03:33:29 +0000 (0:00:00.160) 0:00:27.061 ******* 2026-02-16 03:33:32.442819 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:32.442830 | orchestrator | 2026-02-16 03:33:32.442841 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-16 03:33:32.442852 | orchestrator | Monday 16 February 2026 03:33:29 +0000 (0:00:00.133) 0:00:27.194 ******* 2026-02-16 03:33:32.442863 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:32.442873 | orchestrator | 2026-02-16 03:33:32.442884 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-16 03:33:32.442895 | orchestrator | Monday 16 February 2026 03:33:29 +0000 (0:00:00.122) 0:00:27.317 ******* 2026-02-16 03:33:32.442906 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:33:32.442917 | orchestrator | 2026-02-16 03:33:32.442927 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-16 03:33:32.442938 | orchestrator | Monday 16 February 2026 03:33:30 +0000 
(0:00:00.134) 0:00:27.451 ******* 2026-02-16 03:33:32.442949 | orchestrator | changed: [testbed-node-4] => { 2026-02-16 03:33:32.442960 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-16 03:33:32.442971 | orchestrator |  "ceph_osd_devices": { 2026-02-16 03:33:32.442982 | orchestrator |  "sdb": { 2026-02-16 03:33:32.442992 | orchestrator |  "osd_lvm_uuid": "ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d" 2026-02-16 03:33:32.443003 | orchestrator |  }, 2026-02-16 03:33:32.443014 | orchestrator |  "sdc": { 2026-02-16 03:33:32.443025 | orchestrator |  "osd_lvm_uuid": "3ec6a818-dc71-5cb4-ac47-83f209d09bca" 2026-02-16 03:33:32.443036 | orchestrator |  } 2026-02-16 03:33:32.443047 | orchestrator |  }, 2026-02-16 03:33:32.443057 | orchestrator |  "lvm_volumes": [ 2026-02-16 03:33:32.443068 | orchestrator |  { 2026-02-16 03:33:32.443079 | orchestrator |  "data": "osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d", 2026-02-16 03:33:32.443090 | orchestrator |  "data_vg": "ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d" 2026-02-16 03:33:32.443101 | orchestrator |  }, 2026-02-16 03:33:32.443112 | orchestrator |  { 2026-02-16 03:33:32.443130 | orchestrator |  "data": "osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca", 2026-02-16 03:33:32.443141 | orchestrator |  "data_vg": "ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca" 2026-02-16 03:33:32.443151 | orchestrator |  } 2026-02-16 03:33:32.443162 | orchestrator |  ] 2026-02-16 03:33:32.443173 | orchestrator |  } 2026-02-16 03:33:32.443184 | orchestrator | } 2026-02-16 03:33:32.443195 | orchestrator | 2026-02-16 03:33:32.443206 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-16 03:33:32.443217 | orchestrator | Monday 16 February 2026 03:33:30 +0000 (0:00:00.394) 0:00:27.845 ******* 2026-02-16 03:33:32.443228 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-16 03:33:32.443239 | orchestrator | 2026-02-16 03:33:32.443249 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-02-16 03:33:32.443260 | orchestrator | 2026-02-16 03:33:32.443271 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-16 03:33:32.443282 | orchestrator | Monday 16 February 2026 03:33:31 +0000 (0:00:01.130) 0:00:28.976 ******* 2026-02-16 03:33:32.443293 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-16 03:33:32.443303 | orchestrator | 2026-02-16 03:33:32.443314 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-16 03:33:32.443325 | orchestrator | Monday 16 February 2026 03:33:31 +0000 (0:00:00.267) 0:00:29.243 ******* 2026-02-16 03:33:32.443336 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:33:32.443347 | orchestrator | 2026-02-16 03:33:32.443357 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:32.443368 | orchestrator | Monday 16 February 2026 03:33:32 +0000 (0:00:00.237) 0:00:29.481 ******* 2026-02-16 03:33:32.443379 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-16 03:33:32.443417 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-16 03:33:32.443428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-16 03:33:32.443439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-16 03:33:32.443450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-16 03:33:32.443468 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-16 03:33:40.933205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-16 03:33:40.933314 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-16 03:33:40.933329 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-16 03:33:40.933341 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-16 03:33:40.933352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-16 03:33:40.933364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-16 03:33:40.933392 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-16 03:33:40.933488 | orchestrator | 2026-02-16 03:33:40.933502 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:40.933515 | orchestrator | Monday 16 February 2026 03:33:32 +0000 (0:00:00.369) 0:00:29.851 ******* 2026-02-16 03:33:40.933526 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.933538 | orchestrator | 2026-02-16 03:33:40.933550 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:40.933560 | orchestrator | Monday 16 February 2026 03:33:32 +0000 (0:00:00.198) 0:00:30.049 ******* 2026-02-16 03:33:40.933571 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.933582 | orchestrator | 2026-02-16 03:33:40.933593 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:40.933625 | orchestrator | Monday 16 February 2026 03:33:32 +0000 (0:00:00.207) 0:00:30.256 ******* 2026-02-16 03:33:40.933640 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.933659 | orchestrator | 2026-02-16 03:33:40.933677 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:40.933697 | 
orchestrator | Monday 16 February 2026 03:33:33 +0000 (0:00:00.213) 0:00:30.470 ******* 2026-02-16 03:33:40.933715 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.933734 | orchestrator | 2026-02-16 03:33:40.933750 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:40.933764 | orchestrator | Monday 16 February 2026 03:33:33 +0000 (0:00:00.610) 0:00:31.081 ******* 2026-02-16 03:33:40.933775 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.933787 | orchestrator | 2026-02-16 03:33:40.933800 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:40.933813 | orchestrator | Monday 16 February 2026 03:33:33 +0000 (0:00:00.236) 0:00:31.317 ******* 2026-02-16 03:33:40.933825 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.933837 | orchestrator | 2026-02-16 03:33:40.933849 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:40.933862 | orchestrator | Monday 16 February 2026 03:33:34 +0000 (0:00:00.238) 0:00:31.556 ******* 2026-02-16 03:33:40.933874 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.933886 | orchestrator | 2026-02-16 03:33:40.933898 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:40.933910 | orchestrator | Monday 16 February 2026 03:33:34 +0000 (0:00:00.202) 0:00:31.758 ******* 2026-02-16 03:33:40.933922 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.933934 | orchestrator | 2026-02-16 03:33:40.933945 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:40.933957 | orchestrator | Monday 16 February 2026 03:33:34 +0000 (0:00:00.198) 0:00:31.957 ******* 2026-02-16 03:33:40.933970 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d) 2026-02-16 03:33:40.933984 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d) 2026-02-16 03:33:40.933995 | orchestrator | 2026-02-16 03:33:40.934008 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:40.934087 | orchestrator | Monday 16 February 2026 03:33:34 +0000 (0:00:00.415) 0:00:32.372 ******* 2026-02-16 03:33:40.934100 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5) 2026-02-16 03:33:40.934111 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5) 2026-02-16 03:33:40.934122 | orchestrator | 2026-02-16 03:33:40.934132 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:40.934143 | orchestrator | Monday 16 February 2026 03:33:35 +0000 (0:00:00.406) 0:00:32.779 ******* 2026-02-16 03:33:40.934164 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569) 2026-02-16 03:33:40.934176 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569) 2026-02-16 03:33:40.934186 | orchestrator | 2026-02-16 03:33:40.934197 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:33:40.934208 | orchestrator | Monday 16 February 2026 03:33:35 +0000 (0:00:00.439) 0:00:33.218 ******* 2026-02-16 03:33:40.934219 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d) 2026-02-16 03:33:40.934230 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d) 2026-02-16 03:33:40.934241 | orchestrator | 2026-02-16 03:33:40.934252 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-16 03:33:40.934263 | orchestrator | Monday 16 February 2026 03:33:36 +0000 (0:00:00.410) 0:00:33.629 ******* 2026-02-16 03:33:40.934285 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-16 03:33:40.934296 | orchestrator | 2026-02-16 03:33:40.934307 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:40.934339 | orchestrator | Monday 16 February 2026 03:33:36 +0000 (0:00:00.334) 0:00:33.963 ******* 2026-02-16 03:33:40.934350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-16 03:33:40.934361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-16 03:33:40.934372 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-16 03:33:40.934383 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-16 03:33:40.934393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-16 03:33:40.934462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-16 03:33:40.934491 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-16 03:33:40.934511 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-16 03:33:40.934529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-16 03:33:40.934544 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-16 03:33:40.934554 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-02-16 03:33:40.934565 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-16 03:33:40.934576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-16 03:33:40.934586 | orchestrator | 2026-02-16 03:33:40.934597 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:40.934608 | orchestrator | Monday 16 February 2026 03:33:37 +0000 (0:00:00.582) 0:00:34.546 ******* 2026-02-16 03:33:40.934619 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.934629 | orchestrator | 2026-02-16 03:33:40.934640 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:40.934651 | orchestrator | Monday 16 February 2026 03:33:37 +0000 (0:00:00.204) 0:00:34.751 ******* 2026-02-16 03:33:40.934661 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.934672 | orchestrator | 2026-02-16 03:33:40.934682 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:40.934693 | orchestrator | Monday 16 February 2026 03:33:37 +0000 (0:00:00.211) 0:00:34.963 ******* 2026-02-16 03:33:40.934703 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.934714 | orchestrator | 2026-02-16 03:33:40.934725 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:40.934735 | orchestrator | Monday 16 February 2026 03:33:37 +0000 (0:00:00.212) 0:00:35.175 ******* 2026-02-16 03:33:40.934746 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.934757 | orchestrator | 2026-02-16 03:33:40.934768 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:40.934778 | orchestrator | Monday 16 February 2026 03:33:37 +0000 (0:00:00.203) 0:00:35.379 ******* 2026-02-16 03:33:40.934788 
| orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.934799 | orchestrator | 2026-02-16 03:33:40.934810 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:40.934820 | orchestrator | Monday 16 February 2026 03:33:38 +0000 (0:00:00.196) 0:00:35.576 ******* 2026-02-16 03:33:40.934831 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.934842 | orchestrator | 2026-02-16 03:33:40.934852 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:40.934873 | orchestrator | Monday 16 February 2026 03:33:38 +0000 (0:00:00.208) 0:00:35.784 ******* 2026-02-16 03:33:40.934884 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.934894 | orchestrator | 2026-02-16 03:33:40.934905 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:40.934915 | orchestrator | Monday 16 February 2026 03:33:38 +0000 (0:00:00.205) 0:00:35.990 ******* 2026-02-16 03:33:40.934926 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.934937 | orchestrator | 2026-02-16 03:33:40.934947 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:40.934958 | orchestrator | Monday 16 February 2026 03:33:38 +0000 (0:00:00.205) 0:00:36.195 ******* 2026-02-16 03:33:40.934969 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-16 03:33:40.934980 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-16 03:33:40.934990 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-16 03:33:40.935001 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-16 03:33:40.935012 | orchestrator | 2026-02-16 03:33:40.935022 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:40.935033 | orchestrator | Monday 16 February 2026 03:33:39 +0000 (0:00:00.868) 
0:00:37.064 ******* 2026-02-16 03:33:40.935044 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.935054 | orchestrator | 2026-02-16 03:33:40.935065 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:40.935075 | orchestrator | Monday 16 February 2026 03:33:39 +0000 (0:00:00.197) 0:00:37.261 ******* 2026-02-16 03:33:40.935086 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.935096 | orchestrator | 2026-02-16 03:33:40.935107 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:40.935117 | orchestrator | Monday 16 February 2026 03:33:40 +0000 (0:00:00.211) 0:00:37.473 ******* 2026-02-16 03:33:40.935128 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.935138 | orchestrator | 2026-02-16 03:33:40.935149 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:33:40.935160 | orchestrator | Monday 16 February 2026 03:33:40 +0000 (0:00:00.651) 0:00:38.125 ******* 2026-02-16 03:33:40.935171 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:33:40.935181 | orchestrator | 2026-02-16 03:33:40.935200 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-16 03:33:45.082750 | orchestrator | Monday 16 February 2026 03:33:40 +0000 (0:00:00.215) 0:00:38.341 ******* 2026-02-16 03:33:45.082836 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-16 03:33:45.082847 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-16 03:33:45.082855 | orchestrator | 2026-02-16 03:33:45.082863 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-16 03:33:45.082872 | orchestrator | Monday 16 February 2026 03:33:41 +0000 (0:00:00.175) 0:00:38.516 ******* 2026-02-16 03:33:45.082880 | orchestrator | skipping: 
[testbed-node-5]
2026-02-16 03:33:45.082887 | orchestrator |
2026-02-16 03:33:45.082896 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-16 03:33:45.082904 | orchestrator | Monday 16 February 2026 03:33:41 +0000 (0:00:00.144) 0:00:38.660 *******
2026-02-16 03:33:45.082926 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:33:45.082934 | orchestrator |
2026-02-16 03:33:45.082941 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-16 03:33:45.082948 | orchestrator | Monday 16 February 2026 03:33:41 +0000 (0:00:00.151) 0:00:38.811 *******
2026-02-16 03:33:45.082955 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:33:45.082963 | orchestrator |
2026-02-16 03:33:45.082970 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-16 03:33:45.082977 | orchestrator | Monday 16 February 2026 03:33:41 +0000 (0:00:00.139) 0:00:38.951 *******
2026-02-16 03:33:45.082984 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:33:45.083022 | orchestrator |
2026-02-16 03:33:45.083030 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-16 03:33:45.083055 | orchestrator | Monday 16 February 2026 03:33:41 +0000 (0:00:00.147) 0:00:39.099 *******
2026-02-16 03:33:45.083063 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}})
2026-02-16 03:33:45.083071 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f418f421-cc32-53ce-b421-39353fe37c02'}})
2026-02-16 03:33:45.083078 | orchestrator |
2026-02-16 03:33:45.083085 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-16 03:33:45.083092 | orchestrator | Monday 16 February 2026 03:33:41 +0000 (0:00:00.178) 0:00:39.277 *******
2026-02-16 03:33:45.083100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}})
2026-02-16 03:33:45.083109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f418f421-cc32-53ce-b421-39353fe37c02'}})
2026-02-16 03:33:45.083117 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:33:45.083124 | orchestrator |
2026-02-16 03:33:45.083131 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-16 03:33:45.083138 | orchestrator | Monday 16 February 2026 03:33:42 +0000 (0:00:00.167) 0:00:39.445 *******
2026-02-16 03:33:45.083145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}})
2026-02-16 03:33:45.083153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f418f421-cc32-53ce-b421-39353fe37c02'}})
2026-02-16 03:33:45.083160 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:33:45.083167 | orchestrator |
2026-02-16 03:33:45.083174 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-16 03:33:45.083181 | orchestrator | Monday 16 February 2026 03:33:42 +0000 (0:00:00.168) 0:00:39.614 *******
2026-02-16 03:33:45.083189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}})
2026-02-16 03:33:45.083196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f418f421-cc32-53ce-b421-39353fe37c02'}})
2026-02-16 03:33:45.083203 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:33:45.083210 | orchestrator |
2026-02-16 03:33:45.083218 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-16 03:33:45.083225 | orchestrator | Monday 16 February 2026 03:33:42 +0000 (0:00:00.153) 0:00:39.768 *******
2026-02-16 03:33:45.083232 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:33:45.083239 | orchestrator |
2026-02-16 03:33:45.083246 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-16 03:33:45.083253 | orchestrator | Monday 16 February 2026 03:33:42 +0000 (0:00:00.145) 0:00:39.914 *******
2026-02-16 03:33:45.083261 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:33:45.083268 | orchestrator |
2026-02-16 03:33:45.083275 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-16 03:33:45.083282 | orchestrator | Monday 16 February 2026 03:33:42 +0000 (0:00:00.356) 0:00:40.270 *******
2026-02-16 03:33:45.083291 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:33:45.083299 | orchestrator |
2026-02-16 03:33:45.083308 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-16 03:33:45.083316 | orchestrator | Monday 16 February 2026 03:33:42 +0000 (0:00:00.120) 0:00:40.391 *******
2026-02-16 03:33:45.083325 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:33:45.083333 | orchestrator |
2026-02-16 03:33:45.083342 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-16 03:33:45.083351 | orchestrator | Monday 16 February 2026 03:33:43 +0000 (0:00:00.136) 0:00:40.527 *******
2026-02-16 03:33:45.083361 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:33:45.083374 | orchestrator |
2026-02-16 03:33:45.083386 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-16 03:33:45.083429 | orchestrator | Monday 16 February 2026 03:33:43 +0000 (0:00:00.143) 0:00:40.671 *******
2026-02-16 03:33:45.083444 | orchestrator | ok: [testbed-node-5] => {
2026-02-16 03:33:45.083458 | orchestrator |     "ceph_osd_devices": {
2026-02-16 03:33:45.083471 | orchestrator |         "sdb": {
2026-02-16 03:33:45.083497 | orchestrator |             "osd_lvm_uuid": "10a0662d-59e9-5a43-af5c-1b6d671b7fa5"
2026-02-16 03:33:45.083507 | orchestrator |         },
2026-02-16 03:33:45.083516 | orchestrator |         "sdc": {
2026-02-16 03:33:45.083525 | orchestrator |             "osd_lvm_uuid": "f418f421-cc32-53ce-b421-39353fe37c02"
2026-02-16 03:33:45.083533 | orchestrator |         }
2026-02-16 03:33:45.083542 | orchestrator |     }
2026-02-16 03:33:45.083553 | orchestrator | }
2026-02-16 03:33:45.083566 | orchestrator |
2026-02-16 03:33:45.083579 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-16 03:33:45.083590 | orchestrator | Monday 16 February 2026 03:33:43 +0000 (0:00:00.151) 0:00:40.822 *******
2026-02-16 03:33:45.083602 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:33:45.083613 | orchestrator |
2026-02-16 03:33:45.083626 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-16 03:33:45.083661 | orchestrator | Monday 16 February 2026 03:33:43 +0000 (0:00:00.138) 0:00:40.961 *******
2026-02-16 03:33:45.083675 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:33:45.083687 | orchestrator |
2026-02-16 03:33:45.083700 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-16 03:33:45.083711 | orchestrator | Monday 16 February 2026 03:33:43 +0000 (0:00:00.145) 0:00:41.107 *******
2026-02-16 03:33:45.083722 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:33:45.083733 | orchestrator |
2026-02-16 03:33:45.083744 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-16 03:33:45.083755 | orchestrator | Monday 16 February 2026 03:33:43 +0000 (0:00:00.135) 0:00:41.242 *******
2026-02-16 03:33:45.083767 | orchestrator | changed: [testbed-node-5] => {
2026-02-16 03:33:45.083779 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-16 03:33:45.083790 | orchestrator |         "ceph_osd_devices": {
2026-02-16 03:33:45.083802 | orchestrator |             "sdb": {
2026-02-16 03:33:45.083814 | orchestrator |                 "osd_lvm_uuid": "10a0662d-59e9-5a43-af5c-1b6d671b7fa5"
2026-02-16 03:33:45.083825 | orchestrator |             },
2026-02-16 03:33:45.083836 | orchestrator |             "sdc": {
2026-02-16 03:33:45.083848 | orchestrator |                 "osd_lvm_uuid": "f418f421-cc32-53ce-b421-39353fe37c02"
2026-02-16 03:33:45.083860 | orchestrator |             }
2026-02-16 03:33:45.083873 | orchestrator |         },
2026-02-16 03:33:45.083885 | orchestrator |         "lvm_volumes": [
2026-02-16 03:33:45.083898 | orchestrator |             {
2026-02-16 03:33:45.083909 | orchestrator |                 "data": "osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5",
2026-02-16 03:33:45.083922 | orchestrator |                 "data_vg": "ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5"
2026-02-16 03:33:45.083930 | orchestrator |             },
2026-02-16 03:33:45.083938 | orchestrator |             {
2026-02-16 03:33:45.083945 | orchestrator |                 "data": "osd-block-f418f421-cc32-53ce-b421-39353fe37c02",
2026-02-16 03:33:45.083952 | orchestrator |                 "data_vg": "ceph-f418f421-cc32-53ce-b421-39353fe37c02"
2026-02-16 03:33:45.083959 | orchestrator |             }
2026-02-16 03:33:45.083972 | orchestrator |         ]
2026-02-16 03:33:45.083984 | orchestrator |     }
2026-02-16 03:33:45.083996 | orchestrator | }
2026-02-16 03:33:45.084007 | orchestrator |
2026-02-16 03:33:45.084018 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-16 03:33:45.084029 | orchestrator | Monday 16 February 2026 03:33:44 +0000 (0:00:00.239) 0:00:41.482 *******
2026-02-16 03:33:45.084040 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-16 03:33:45.084051 | orchestrator |
2026-02-16 03:33:45.084061 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 03:33:45.084073 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2026-02-16 03:33:45.084102 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2026-02-16 03:33:45.084114 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2026-02-16 03:33:45.084128 | orchestrator |
2026-02-16 03:33:45.084139 | orchestrator |
2026-02-16 03:33:45.084152 | orchestrator |
2026-02-16 03:33:45.084164 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 03:33:45.084175 | orchestrator | Monday 16 February 2026 03:33:45 +0000 (0:00:00.995) 0:00:42.478 *******
2026-02-16 03:33:45.084186 | orchestrator | ===============================================================================
2026-02-16 03:33:45.084197 | orchestrator | Write configuration file ------------------------------------------------ 3.84s
2026-02-16 03:33:45.084208 | orchestrator | Add known partitions to the list of available block devices ------------- 1.85s
2026-02-16 03:33:45.084219 | orchestrator | Add known links to the list of available block devices ------------------ 1.29s
2026-02-16 03:33:45.084230 | orchestrator | Print configuration data ------------------------------------------------ 1.05s
2026-02-16 03:33:45.084243 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s
2026-02-16 03:33:45.084254 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s
2026-02-16 03:33:45.084265 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s
2026-02-16 03:33:45.084277 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s
2026-02-16 03:33:45.084288 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.79s
2026-02-16 03:33:45.084299 | orchestrator | Get initial list of available block devices ----------------------------- 0.70s
2026-02-16 03:33:45.084310 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2026-02-16 03:33:45.084321 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.66s
2026-02-16 03:33:45.084334 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.66s
2026-02-16 03:33:45.084361 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2026-02-16 03:33:45.467135 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2026-02-16 03:33:45.467222 | orchestrator | Set OSD devices config data --------------------------------------------- 0.65s
2026-02-16 03:33:45.467234 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2026-02-16 03:33:45.467243 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2026-02-16 03:33:45.467251 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s
2026-02-16 03:33:45.467259 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s
2026-02-16 03:34:07.894350 | orchestrator | 2026-02-16 03:34:07 | INFO  | Task 09fae301-f1e6-473e-9386-217ae45388fc (sync inventory) is running in background. Output coming soon.
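The play above derives the `lvm_volumes` list purely from the `osd_lvm_uuid` values in `ceph_osd_devices`: each UUID yields an LV named `osd-block-<uuid>` inside a VG named `ceph-<uuid>`, exactly as shown in the "Print configuration data" output. A minimal sketch of that mapping (the helper name is illustrative; the playbook does this with Jinja2 filters, not Python):

```python
# Sketch of the block-only lvm_volumes derivation, assuming the
# ceph_osd_devices structure printed in the log. build_lvm_volumes
# is a hypothetical helper, not part of the OSISM playbooks.

def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    volumes = []
    for device, config in sorted(ceph_osd_devices.items()):
        uuid = config["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group name
        })
    return volumes

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "10a0662d-59e9-5a43-af5c-1b6d671b7fa5"},
    "sdc": {"osd_lvm_uuid": "f418f421-cc32-53ce-b421-39353fe37c02"},
}
lvm_volumes = build_lvm_volumes(ceph_osd_devices)
print(lvm_volumes)
```

Note that the device names (`sdb`, `sdc`) drop out of the result: the VG/LV names carry only the stable UUID, which is what makes the config reusable across reboots where device letters may shift.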
2026-02-16 03:34:33.715669 | orchestrator | 2026-02-16 03:34:09 | INFO  | Starting group_vars file reorganization
2026-02-16 03:34:33.715788 | orchestrator | 2026-02-16 03:34:09 | INFO  | Moved 0 file(s) to their respective directories
2026-02-16 03:34:33.715813 | orchestrator | 2026-02-16 03:34:09 | INFO  | Group_vars file reorganization completed
2026-02-16 03:34:33.715832 | orchestrator | 2026-02-16 03:34:12 | INFO  | Starting variable preparation from inventory
2026-02-16 03:34:33.715852 | orchestrator | 2026-02-16 03:34:14 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-16 03:34:33.715871 | orchestrator | 2026-02-16 03:34:14 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-16 03:34:33.715917 | orchestrator | 2026-02-16 03:34:14 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-16 03:34:33.715937 | orchestrator | 2026-02-16 03:34:14 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-16 03:34:33.715956 | orchestrator | 2026-02-16 03:34:14 | INFO  | Variable preparation completed
2026-02-16 03:34:33.715975 | orchestrator | 2026-02-16 03:34:15 | INFO  | Starting inventory overwrite handling
2026-02-16 03:34:33.715994 | orchestrator | 2026-02-16 03:34:15 | INFO  | Handling group overwrites in 99-overwrite
2026-02-16 03:34:33.716012 | orchestrator | 2026-02-16 03:34:15 | INFO  | Removing group frr:children from 60-generic
2026-02-16 03:34:33.716031 | orchestrator | 2026-02-16 03:34:15 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-16 03:34:33.716050 | orchestrator | 2026-02-16 03:34:15 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-16 03:34:33.716069 | orchestrator | 2026-02-16 03:34:15 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-16 03:34:33.716089 | orchestrator | 2026-02-16 03:34:15 | INFO  | Handling group overwrites in 20-roles
2026-02-16 03:34:33.716107 | orchestrator | 2026-02-16 03:34:15 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-16 03:34:33.716126 | orchestrator | 2026-02-16 03:34:15 | INFO  | Removed 5 group(s) in total
2026-02-16 03:34:33.716143 | orchestrator | 2026-02-16 03:34:15 | INFO  | Inventory overwrite handling completed
2026-02-16 03:34:33.716161 | orchestrator | 2026-02-16 03:34:16 | INFO  | Starting merge of inventory files
2026-02-16 03:34:33.716177 | orchestrator | 2026-02-16 03:34:16 | INFO  | Inventory files merged successfully
2026-02-16 03:34:33.716195 | orchestrator | 2026-02-16 03:34:21 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-16 03:34:33.716214 | orchestrator | 2026-02-16 03:34:32 | INFO  | Successfully wrote ClusterShell configuration
2026-02-16 03:34:33.716234 | orchestrator | [master f9a0cf5] 2026-02-16-03-34
2026-02-16 03:34:33.716255 | orchestrator |  1 file changed, 30 insertions(+), 9 deletions(-)
2026-02-16 03:34:36.029806 | orchestrator | 2026-02-16 03:34:36 | INFO  | Task 60bafd60-487e-4b53-b3db-475e0e575756 (ceph-create-lvm-devices) was prepared for execution.
2026-02-16 03:34:36.029906 | orchestrator | 2026-02-16 03:34:36 | INFO  | It takes a moment until task 60bafd60-487e-4b53-b3db-475e0e575756 (ceph-create-lvm-devices) has been started and output is visible here.
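The "inventory overwrite handling" step above removes a group from lower-priority inventory layers whenever a higher-priority layer (here `99-overwrite` and `20-roles`) redefines it, then reports the total ("Removed 5 group(s) in total"). A rough sketch of that precedence rule, under the simplifying assumption that each layer is just a list of group names (the real tool operates on inventory files on disk):

```python
# Sketch of inventory group-overwrite precedence. The layer contents
# below are illustrative; apply_overwrites is a hypothetical helper.

def apply_overwrites(layers: dict, overwrite_layer: str) -> int:
    """Remove every group defined in `overwrite_layer` from all other
    layers, so the high-priority definition wins. Returns the number of
    removals, matching the 'Removed N group(s)' log line."""
    removed = 0
    for group in layers[overwrite_layer]:
        for name, groups in layers.items():
            if name != overwrite_layer and group in groups:
                groups.remove(group)
                removed += 1
    return removed

layers = {
    "99-overwrite": ["frr:children", "netbird:children"],
    "60-generic": ["frr:children", "compute"],
    "50-infrastructure": ["netbird:children", "manager"],
}
removed = apply_overwrites(layers, "99-overwrite")
print(removed)
```

After this reconciliation, merging the layer files in order can no longer produce duplicate group definitions, which is why the subsequent "merge of inventory files" step succeeds cleanly.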
2026-02-16 03:34:47.558107 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-16 03:34:47.558220 | orchestrator | 2.16.14
2026-02-16 03:34:47.558239 | orchestrator |
2026-02-16 03:34:47.558256 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-16 03:34:47.558272 | orchestrator |
2026-02-16 03:34:47.558288 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-16 03:34:47.558303 | orchestrator | Monday 16 February 2026 03:34:40 +0000 (0:00:00.297) 0:00:00.297 *******
2026-02-16 03:34:47.558318 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-16 03:34:47.558333 | orchestrator |
2026-02-16 03:34:47.558346 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-16 03:34:47.558360 | orchestrator | Monday 16 February 2026 03:34:40 +0000 (0:00:00.268) 0:00:00.565 *******
2026-02-16 03:34:47.558373 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:34:47.558387 | orchestrator |
2026-02-16 03:34:47.558402 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:34:47.558416 | orchestrator | Monday 16 February 2026 03:34:40 +0000 (0:00:00.232) 0:00:00.798 *******
2026-02-16 03:34:47.558431 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-16 03:34:47.558474 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-16 03:34:47.558550 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-16 03:34:47.558564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-16 03:34:47.558592 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-16 03:34:47.558607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-16 03:34:47.558621 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-16 03:34:47.558636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-16 03:34:47.558651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-16 03:34:47.558666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-16 03:34:47.558681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-16 03:34:47.558696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-16 03:34:47.558711 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-16 03:34:47.558725 | orchestrator |
2026-02-16 03:34:47.558741 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:34:47.558756 | orchestrator | Monday 16 February 2026 03:34:41 +0000 (0:00:00.503) 0:00:01.302 *******
2026-02-16 03:34:47.558770 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:47.558786 | orchestrator |
2026-02-16 03:34:47.558801 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:34:47.558816 | orchestrator | Monday 16 February 2026 03:34:41 +0000 (0:00:00.197) 0:00:01.500 *******
2026-02-16 03:34:47.558831 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:47.558846 | orchestrator |
2026-02-16 03:34:47.558861 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:34:47.558876 | orchestrator | Monday 16 February 2026 03:34:41 +0000 (0:00:00.207) 0:00:01.707 *******
2026-02-16 03:34:47.558890 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:47.558903 | orchestrator |
2026-02-16 03:34:47.558917 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:34:47.558930 | orchestrator | Monday 16 February 2026 03:34:41 +0000 (0:00:00.202) 0:00:01.910 *******
2026-02-16 03:34:47.558944 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:47.558957 | orchestrator |
2026-02-16 03:34:47.558971 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:34:47.558984 | orchestrator | Monday 16 February 2026 03:34:42 +0000 (0:00:00.209) 0:00:02.119 *******
2026-02-16 03:34:47.558998 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:47.559012 | orchestrator |
2026-02-16 03:34:47.559026 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:34:47.559039 | orchestrator | Monday 16 February 2026 03:34:42 +0000 (0:00:00.213) 0:00:02.333 *******
2026-02-16 03:34:47.559053 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:47.559066 | orchestrator |
2026-02-16 03:34:47.559081 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:34:47.559094 | orchestrator | Monday 16 February 2026 03:34:42 +0000 (0:00:00.198) 0:00:02.531 *******
2026-02-16 03:34:47.559108 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:47.559121 | orchestrator |
2026-02-16 03:34:47.559135 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:34:47.559148 | orchestrator | Monday 16 February 2026 03:34:42 +0000 (0:00:00.213) 0:00:02.745 *******
2026-02-16 03:34:47.559161 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:47.559175 | orchestrator |
2026-02-16 03:34:47.559189 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:34:47.559212 | orchestrator | Monday 16 February 2026 03:34:42 +0000 (0:00:00.196) 0:00:02.942 *******
2026-02-16 03:34:47.559225 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5)
2026-02-16 03:34:47.559241 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5)
2026-02-16 03:34:47.559255 | orchestrator |
2026-02-16 03:34:47.559269 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:34:47.559302 | orchestrator | Monday 16 February 2026 03:34:43 +0000 (0:00:00.428) 0:00:03.371 *******
2026-02-16 03:34:47.559316 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51)
2026-02-16 03:34:47.559330 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51)
2026-02-16 03:34:47.559343 | orchestrator |
2026-02-16 03:34:47.559357 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:34:47.559371 | orchestrator | Monday 16 February 2026 03:34:43 +0000 (0:00:00.606) 0:00:03.977 *******
2026-02-16 03:34:47.559384 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e)
2026-02-16 03:34:47.559398 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e)
2026-02-16 03:34:47.559411 | orchestrator |
2026-02-16 03:34:47.559424 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:34:47.559438 | orchestrator | Monday 16 February 2026 03:34:44 +0000 (0:00:00.632) 0:00:04.609 *******
2026-02-16 03:34:47.559452 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2)
2026-02-16 03:34:47.559466 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2)
2026-02-16 03:34:47.559523 | orchestrator |
2026-02-16 03:34:47.559538 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:34:47.559552 | orchestrator | Monday 16 February 2026 03:34:45 +0000 (0:00:00.829) 0:00:05.439 *******
2026-02-16 03:34:47.559565 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-16 03:34:47.559579 | orchestrator |
2026-02-16 03:34:47.559598 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:34:47.559611 | orchestrator | Monday 16 February 2026 03:34:45 +0000 (0:00:00.332) 0:00:05.771 *******
2026-02-16 03:34:47.559625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-16 03:34:47.559638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-16 03:34:47.559652 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-16 03:34:47.559665 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-16 03:34:47.559679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-16 03:34:47.559692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-16 03:34:47.559706 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-16 03:34:47.559719 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-16 03:34:47.559733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-16 03:34:47.559746 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-16 03:34:47.559759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-16 03:34:47.559773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-16 03:34:47.559795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-16 03:34:47.559808 | orchestrator |
2026-02-16 03:34:47.559821 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:34:47.559833 | orchestrator | Monday 16 February 2026 03:34:46 +0000 (0:00:00.413) 0:00:06.185 *******
2026-02-16 03:34:47.559847 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:47.559861 | orchestrator |
2026-02-16 03:34:47.559875 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:34:47.559888 | orchestrator | Monday 16 February 2026 03:34:46 +0000 (0:00:00.202) 0:00:06.387 *******
2026-02-16 03:34:47.559902 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:47.559915 | orchestrator |
2026-02-16 03:34:47.559928 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:34:47.559941 | orchestrator | Monday 16 February 2026 03:34:46 +0000 (0:00:00.207) 0:00:06.595 *******
2026-02-16 03:34:47.559953 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:47.559965 | orchestrator |
2026-02-16 03:34:47.559977 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:34:47.559991 | orchestrator | Monday 16 February 2026 03:34:46 +0000 (0:00:00.208) 0:00:06.803 *******
2026-02-16 03:34:47.560005 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:47.560017 | orchestrator |
2026-02-16 03:34:47.560029 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:34:47.560043 | orchestrator | Monday 16 February 2026 03:34:46 +0000 (0:00:00.191) 0:00:06.995 *******
2026-02-16 03:34:47.560056 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:47.560070 | orchestrator |
2026-02-16 03:34:47.560084 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:34:47.560097 | orchestrator | Monday 16 February 2026 03:34:47 +0000 (0:00:00.194) 0:00:07.190 *******
2026-02-16 03:34:47.560111 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:47.560124 | orchestrator |
2026-02-16 03:34:47.560138 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:34:47.560151 | orchestrator | Monday 16 February 2026 03:34:47 +0000 (0:00:00.199) 0:00:07.389 *******
2026-02-16 03:34:47.560165 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:47.560178 | orchestrator |
2026-02-16 03:34:47.560201 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:34:55.621899 | orchestrator | Monday 16 February 2026 03:34:47 +0000 (0:00:00.204) 0:00:07.594 *******
2026-02-16 03:34:55.621999 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:55.622011 | orchestrator |
2026-02-16 03:34:55.622074 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:34:55.622083 | orchestrator | Monday 16 February 2026 03:34:48 +0000 (0:00:00.618) 0:00:08.212 *******
2026-02-16 03:34:55.622092 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-16 03:34:55.622101 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-16 03:34:55.622109 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-16 03:34:55.622118 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-16 03:34:55.622126 | orchestrator |
2026-02-16 03:34:55.622134 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:34:55.622142 | orchestrator | Monday 16 February 2026 03:34:48 +0000 (0:00:00.656) 0:00:08.869 *******
2026-02-16 03:34:55.622150 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:55.622158 | orchestrator |
2026-02-16 03:34:55.622166 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:34:55.622174 | orchestrator | Monday 16 February 2026 03:34:49 +0000 (0:00:00.204) 0:00:09.074 *******
2026-02-16 03:34:55.622182 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:55.622190 | orchestrator |
2026-02-16 03:34:55.622198 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:34:55.622206 | orchestrator | Monday 16 February 2026 03:34:49 +0000 (0:00:00.204) 0:00:09.278 *******
2026-02-16 03:34:55.622233 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:55.622242 | orchestrator |
2026-02-16 03:34:55.622249 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:34:55.622269 | orchestrator | Monday 16 February 2026 03:34:49 +0000 (0:00:00.223) 0:00:09.501 *******
2026-02-16 03:34:55.622277 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:55.622285 | orchestrator |
2026-02-16 03:34:55.622293 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-16 03:34:55.622301 | orchestrator | Monday 16 February 2026 03:34:49 +0000 (0:00:00.201) 0:00:09.702 *******
2026-02-16 03:34:55.622309 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:55.622316 | orchestrator |
2026-02-16 03:34:55.622324 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-16 03:34:55.622332 | orchestrator | Monday 16 February 2026 03:34:49 +0000 (0:00:00.137) 0:00:09.840 *******
2026-02-16 03:34:55.622340 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f9a42f5-b575-5e11-9555-a5550e2fae1e'}})
2026-02-16 03:34:55.622349 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '50d7a967-e09e-512a-aa83-aa9bbdf9ab74'}})
2026-02-16 03:34:55.622357 | orchestrator |
2026-02-16 03:34:55.622365 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-16 03:34:55.622373 | orchestrator | Monday 16 February 2026 03:34:49 +0000 (0:00:00.210) 0:00:10.051 *******
2026-02-16 03:34:55.622392 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})
2026-02-16 03:34:55.622401 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})
2026-02-16 03:34:55.622410 | orchestrator |
2026-02-16 03:34:55.622418 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-16 03:34:55.622426 | orchestrator | Monday 16 February 2026 03:34:52 +0000 (0:00:02.017) 0:00:12.069 *******
2026-02-16 03:34:55.622434 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})
2026-02-16 03:34:55.622443 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})
2026-02-16 03:34:55.622453 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:55.622461 | orchestrator |
2026-02-16 03:34:55.622470 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-16 03:34:55.622479 | orchestrator | Monday 16 February 2026 03:34:52 +0000 (0:00:00.150) 0:00:12.219 *******
2026-02-16 03:34:55.622506 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})
2026-02-16 03:34:55.622517 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})
2026-02-16 03:34:55.622526 | orchestrator |
2026-02-16 03:34:55.622536 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-16 03:34:55.622545 | orchestrator | Monday 16 February 2026 03:34:53 +0000 (0:00:01.516) 0:00:13.735 *******
2026-02-16 03:34:55.622554 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})
2026-02-16 03:34:55.622563 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})
2026-02-16 03:34:55.622572 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:55.622581 | orchestrator |
2026-02-16 03:34:55.622590 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-16 03:34:55.622599 | orchestrator | Monday 16 February 2026 03:34:53 +0000 (0:00:00.147) 0:00:13.883 *******
2026-02-16 03:34:55.622631 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:55.622641 | orchestrator |
2026-02-16 03:34:55.622650 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-16 03:34:55.622659 | orchestrator | Monday 16 February 2026 03:34:54 +0000 (0:00:00.313) 0:00:14.196 *******
2026-02-16 03:34:55.622668 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})
2026-02-16 03:34:55.622677 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})
2026-02-16 03:34:55.622686 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:55.622695 | orchestrator |
2026-02-16 03:34:55.622705 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-16 03:34:55.622714 | orchestrator | Monday 16 February 2026 03:34:54 +0000 (0:00:00.155) 0:00:14.351 *******
2026-02-16 03:34:55.622723 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:55.622732 | orchestrator |
2026-02-16 03:34:55.622741 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-16 03:34:55.622750 | orchestrator | Monday 16 February 2026 03:34:54 +0000 (0:00:00.141) 0:00:14.493 *******
2026-02-16 03:34:55.622759 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})
2026-02-16 03:34:55.622772 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})
2026-02-16 03:34:55.622782 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:55.622791 | orchestrator |
2026-02-16 03:34:55.622800 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-16 03:34:55.622808 | orchestrator | Monday 16 February 2026 03:34:54 +0000 (0:00:00.142) 0:00:14.635 *******
2026-02-16 03:34:55.622816 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:34:55.622824 | orchestrator |
2026-02-16 03:34:55.622832 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-16 03:34:55.622840 | orchestrator | Monday
16 February 2026 03:34:54 +0000 (0:00:00.136) 0:00:14.772 ******* 2026-02-16 03:34:55.622848 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 03:34:55.622856 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 03:34:55.622864 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:34:55.622872 | orchestrator | 2026-02-16 03:34:55.622879 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-16 03:34:55.622887 | orchestrator | Monday 16 February 2026 03:34:54 +0000 (0:00:00.155) 0:00:14.928 ******* 2026-02-16 03:34:55.622895 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:34:55.622903 | orchestrator | 2026-02-16 03:34:55.622911 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-16 03:34:55.622919 | orchestrator | Monday 16 February 2026 03:34:55 +0000 (0:00:00.141) 0:00:15.070 ******* 2026-02-16 03:34:55.622927 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 03:34:55.622935 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 03:34:55.622943 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:34:55.622951 | orchestrator | 2026-02-16 03:34:55.622959 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-16 03:34:55.622967 | orchestrator | Monday 16 February 2026 03:34:55 +0000 (0:00:00.151) 0:00:15.222 ******* 2026-02-16 03:34:55.622980 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 03:34:55.622988 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 03:34:55.622996 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:34:55.623004 | orchestrator | 2026-02-16 03:34:55.623012 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-16 03:34:55.623020 | orchestrator | Monday 16 February 2026 03:34:55 +0000 (0:00:00.150) 0:00:15.372 ******* 2026-02-16 03:34:55.623028 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 03:34:55.623036 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 03:34:55.623044 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:34:55.623052 | orchestrator | 2026-02-16 03:34:55.623059 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-16 03:34:55.623067 | orchestrator | Monday 16 February 2026 03:34:55 +0000 (0:00:00.151) 0:00:15.524 ******* 2026-02-16 03:34:55.623075 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:34:55.623083 | orchestrator | 2026-02-16 03:34:55.623091 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-16 03:34:55.623104 | orchestrator | Monday 16 February 2026 03:34:55 +0000 (0:00:00.136) 0:00:15.660 ******* 2026-02-16 03:35:01.971572 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.971687 | orchestrator | 2026-02-16 03:35:01.971702 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-02-16 03:35:01.971711 | orchestrator | Monday 16 February 2026 03:34:55 +0000 (0:00:00.139) 0:00:15.800 ******* 2026-02-16 03:35:01.971717 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.971724 | orchestrator | 2026-02-16 03:35:01.971731 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-16 03:35:01.971738 | orchestrator | Monday 16 February 2026 03:34:56 +0000 (0:00:00.324) 0:00:16.124 ******* 2026-02-16 03:35:01.971749 | orchestrator | ok: [testbed-node-3] => { 2026-02-16 03:35:01.971760 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-16 03:35:01.971770 | orchestrator | } 2026-02-16 03:35:01.971781 | orchestrator | 2026-02-16 03:35:01.971791 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-16 03:35:01.971801 | orchestrator | Monday 16 February 2026 03:34:56 +0000 (0:00:00.140) 0:00:16.264 ******* 2026-02-16 03:35:01.971812 | orchestrator | ok: [testbed-node-3] => { 2026-02-16 03:35:01.971822 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-16 03:35:01.971838 | orchestrator | } 2026-02-16 03:35:01.971849 | orchestrator | 2026-02-16 03:35:01.971858 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-16 03:35:01.971865 | orchestrator | Monday 16 February 2026 03:34:56 +0000 (0:00:00.161) 0:00:16.426 ******* 2026-02-16 03:35:01.971872 | orchestrator | ok: [testbed-node-3] => { 2026-02-16 03:35:01.971880 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-16 03:35:01.971890 | orchestrator | } 2026-02-16 03:35:01.971902 | orchestrator | 2026-02-16 03:35:01.971912 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-16 03:35:01.971923 | orchestrator | Monday 16 February 2026 03:34:56 +0000 (0:00:00.141) 0:00:16.567 ******* 2026-02-16 03:35:01.971930 | orchestrator | ok: 
[testbed-node-3] 2026-02-16 03:35:01.971936 | orchestrator | 2026-02-16 03:35:01.971943 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-16 03:35:01.971949 | orchestrator | Monday 16 February 2026 03:34:57 +0000 (0:00:00.630) 0:00:17.197 ******* 2026-02-16 03:35:01.971956 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:35:01.971981 | orchestrator | 2026-02-16 03:35:01.971988 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-16 03:35:01.971994 | orchestrator | Monday 16 February 2026 03:34:57 +0000 (0:00:00.531) 0:00:17.729 ******* 2026-02-16 03:35:01.972000 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:35:01.972008 | orchestrator | 2026-02-16 03:35:01.972018 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-16 03:35:01.972029 | orchestrator | Monday 16 February 2026 03:34:58 +0000 (0:00:00.518) 0:00:18.248 ******* 2026-02-16 03:35:01.972039 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:35:01.972050 | orchestrator | 2026-02-16 03:35:01.972061 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-16 03:35:01.972070 | orchestrator | Monday 16 February 2026 03:34:58 +0000 (0:00:00.141) 0:00:18.390 ******* 2026-02-16 03:35:01.972077 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972085 | orchestrator | 2026-02-16 03:35:01.972092 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-16 03:35:01.972099 | orchestrator | Monday 16 February 2026 03:34:58 +0000 (0:00:00.121) 0:00:18.511 ******* 2026-02-16 03:35:01.972107 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972114 | orchestrator | 2026-02-16 03:35:01.972121 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-16 03:35:01.972128 | orchestrator | 
Monday 16 February 2026 03:34:58 +0000 (0:00:00.119) 0:00:18.631 ******* 2026-02-16 03:35:01.972135 | orchestrator | ok: [testbed-node-3] => { 2026-02-16 03:35:01.972143 | orchestrator |  "vgs_report": { 2026-02-16 03:35:01.972150 | orchestrator |  "vg": [] 2026-02-16 03:35:01.972158 | orchestrator |  } 2026-02-16 03:35:01.972165 | orchestrator | } 2026-02-16 03:35:01.972172 | orchestrator | 2026-02-16 03:35:01.972179 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-16 03:35:01.972186 | orchestrator | Monday 16 February 2026 03:34:58 +0000 (0:00:00.137) 0:00:18.768 ******* 2026-02-16 03:35:01.972193 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972201 | orchestrator | 2026-02-16 03:35:01.972208 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-16 03:35:01.972216 | orchestrator | Monday 16 February 2026 03:34:58 +0000 (0:00:00.125) 0:00:18.894 ******* 2026-02-16 03:35:01.972223 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972230 | orchestrator | 2026-02-16 03:35:01.972238 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-16 03:35:01.972245 | orchestrator | Monday 16 February 2026 03:34:59 +0000 (0:00:00.353) 0:00:19.247 ******* 2026-02-16 03:35:01.972252 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972259 | orchestrator | 2026-02-16 03:35:01.972266 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-16 03:35:01.972273 | orchestrator | Monday 16 February 2026 03:34:59 +0000 (0:00:00.139) 0:00:19.386 ******* 2026-02-16 03:35:01.972280 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972287 | orchestrator | 2026-02-16 03:35:01.972294 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-16 03:35:01.972302 | orchestrator | Monday 
16 February 2026 03:34:59 +0000 (0:00:00.140) 0:00:19.527 ******* 2026-02-16 03:35:01.972309 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972316 | orchestrator | 2026-02-16 03:35:01.972324 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-16 03:35:01.972332 | orchestrator | Monday 16 February 2026 03:34:59 +0000 (0:00:00.155) 0:00:19.682 ******* 2026-02-16 03:35:01.972339 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972346 | orchestrator | 2026-02-16 03:35:01.972353 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-16 03:35:01.972360 | orchestrator | Monday 16 February 2026 03:34:59 +0000 (0:00:00.133) 0:00:19.815 ******* 2026-02-16 03:35:01.972366 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972374 | orchestrator | 2026-02-16 03:35:01.972381 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-16 03:35:01.972394 | orchestrator | Monday 16 February 2026 03:34:59 +0000 (0:00:00.136) 0:00:19.952 ******* 2026-02-16 03:35:01.972417 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972425 | orchestrator | 2026-02-16 03:35:01.972433 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-16 03:35:01.972440 | orchestrator | Monday 16 February 2026 03:35:00 +0000 (0:00:00.133) 0:00:20.086 ******* 2026-02-16 03:35:01.972447 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972453 | orchestrator | 2026-02-16 03:35:01.972460 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-16 03:35:01.972468 | orchestrator | Monday 16 February 2026 03:35:00 +0000 (0:00:00.131) 0:00:20.217 ******* 2026-02-16 03:35:01.972478 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972488 | orchestrator | 2026-02-16 03:35:01.972515 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-16 03:35:01.972527 | orchestrator | Monday 16 February 2026 03:35:00 +0000 (0:00:00.135) 0:00:20.353 ******* 2026-02-16 03:35:01.972536 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972543 | orchestrator | 2026-02-16 03:35:01.972549 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-16 03:35:01.972555 | orchestrator | Monday 16 February 2026 03:35:00 +0000 (0:00:00.136) 0:00:20.489 ******* 2026-02-16 03:35:01.972561 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972567 | orchestrator | 2026-02-16 03:35:01.972574 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-16 03:35:01.972580 | orchestrator | Monday 16 February 2026 03:35:00 +0000 (0:00:00.155) 0:00:20.644 ******* 2026-02-16 03:35:01.972586 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972592 | orchestrator | 2026-02-16 03:35:01.972598 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-16 03:35:01.972645 | orchestrator | Monday 16 February 2026 03:35:00 +0000 (0:00:00.132) 0:00:20.777 ******* 2026-02-16 03:35:01.972652 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972659 | orchestrator | 2026-02-16 03:35:01.972665 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-16 03:35:01.972671 | orchestrator | Monday 16 February 2026 03:35:01 +0000 (0:00:00.327) 0:00:21.105 ******* 2026-02-16 03:35:01.972678 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 03:35:01.972686 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 
'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 03:35:01.972693 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972699 | orchestrator | 2026-02-16 03:35:01.972705 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-16 03:35:01.972711 | orchestrator | Monday 16 February 2026 03:35:01 +0000 (0:00:00.148) 0:00:21.254 ******* 2026-02-16 03:35:01.972718 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 03:35:01.972724 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 03:35:01.972730 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972737 | orchestrator | 2026-02-16 03:35:01.972743 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-16 03:35:01.972749 | orchestrator | Monday 16 February 2026 03:35:01 +0000 (0:00:00.149) 0:00:21.403 ******* 2026-02-16 03:35:01.972755 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 03:35:01.972762 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 03:35:01.972775 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972781 | orchestrator | 2026-02-16 03:35:01.972787 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-16 03:35:01.972793 | orchestrator | Monday 16 February 2026 03:35:01 +0000 (0:00:00.156) 0:00:21.560 ******* 2026-02-16 03:35:01.972799 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 03:35:01.972806 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 03:35:01.972812 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972818 | orchestrator | 2026-02-16 03:35:01.972824 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-16 03:35:01.972830 | orchestrator | Monday 16 February 2026 03:35:01 +0000 (0:00:00.160) 0:00:21.720 ******* 2026-02-16 03:35:01.972837 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 03:35:01.972843 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 03:35:01.972849 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:01.972855 | orchestrator | 2026-02-16 03:35:01.972861 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-16 03:35:01.972868 | orchestrator | Monday 16 February 2026 03:35:01 +0000 (0:00:00.146) 0:00:21.867 ******* 2026-02-16 03:35:01.972879 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 03:35:07.331086 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 03:35:07.331224 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:07.331251 | orchestrator | 2026-02-16 03:35:07.331273 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-16 03:35:07.331294 | orchestrator | Monday 16 February 2026 03:35:01 +0000 (0:00:00.144) 0:00:22.011 ******* 2026-02-16 03:35:07.331314 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 03:35:07.331333 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 03:35:07.331354 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:07.331366 | orchestrator | 2026-02-16 03:35:07.331378 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-16 03:35:07.331389 | orchestrator | Monday 16 February 2026 03:35:02 +0000 (0:00:00.184) 0:00:22.196 ******* 2026-02-16 03:35:07.331400 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 03:35:07.331428 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 03:35:07.331440 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:07.331451 | orchestrator | 2026-02-16 03:35:07.331462 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-16 03:35:07.331473 | orchestrator | Monday 16 February 2026 03:35:02 +0000 (0:00:00.157) 0:00:22.353 ******* 2026-02-16 03:35:07.331484 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:35:07.331497 | orchestrator | 2026-02-16 03:35:07.331581 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-16 03:35:07.331617 | orchestrator | Monday 16 February 2026 03:35:02 +0000 
(0:00:00.540) 0:00:22.893 ******* 2026-02-16 03:35:07.331630 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:35:07.331643 | orchestrator | 2026-02-16 03:35:07.331655 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-16 03:35:07.331667 | orchestrator | Monday 16 February 2026 03:35:03 +0000 (0:00:00.524) 0:00:23.417 ******* 2026-02-16 03:35:07.331680 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:35:07.331693 | orchestrator | 2026-02-16 03:35:07.331705 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-16 03:35:07.331718 | orchestrator | Monday 16 February 2026 03:35:03 +0000 (0:00:00.164) 0:00:23.582 ******* 2026-02-16 03:35:07.331732 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'vg_name': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'}) 2026-02-16 03:35:07.331746 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'vg_name': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'}) 2026-02-16 03:35:07.331759 | orchestrator | 2026-02-16 03:35:07.331772 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-16 03:35:07.331784 | orchestrator | Monday 16 February 2026 03:35:03 +0000 (0:00:00.167) 0:00:23.750 ******* 2026-02-16 03:35:07.331797 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 03:35:07.331810 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 03:35:07.331822 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:07.331835 | orchestrator | 2026-02-16 03:35:07.331847 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-02-16 03:35:07.331860 | orchestrator | Monday 16 February 2026 03:35:04 +0000 (0:00:00.422) 0:00:24.172 ******* 2026-02-16 03:35:07.331872 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 03:35:07.331885 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 03:35:07.331897 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:07.331910 | orchestrator | 2026-02-16 03:35:07.331923 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-16 03:35:07.331936 | orchestrator | Monday 16 February 2026 03:35:04 +0000 (0:00:00.171) 0:00:24.343 ******* 2026-02-16 03:35:07.331949 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 03:35:07.331962 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 03:35:07.331974 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:35:07.331987 | orchestrator | 2026-02-16 03:35:07.331997 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-16 03:35:07.332008 | orchestrator | Monday 16 February 2026 03:35:04 +0000 (0:00:00.164) 0:00:24.508 ******* 2026-02-16 03:35:07.332041 | orchestrator | ok: [testbed-node-3] => { 2026-02-16 03:35:07.332053 | orchestrator |  "lvm_report": { 2026-02-16 03:35:07.332064 | orchestrator |  "lv": [ 2026-02-16 03:35:07.332076 | orchestrator |  { 2026-02-16 03:35:07.332087 | orchestrator |  "lv_name": 
"osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e", 2026-02-16 03:35:07.332099 | orchestrator |  "vg_name": "ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e" 2026-02-16 03:35:07.332110 | orchestrator |  }, 2026-02-16 03:35:07.332121 | orchestrator |  { 2026-02-16 03:35:07.332132 | orchestrator |  "lv_name": "osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74", 2026-02-16 03:35:07.332150 | orchestrator |  "vg_name": "ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74" 2026-02-16 03:35:07.332162 | orchestrator |  } 2026-02-16 03:35:07.332173 | orchestrator |  ], 2026-02-16 03:35:07.332184 | orchestrator |  "pv": [ 2026-02-16 03:35:07.332194 | orchestrator |  { 2026-02-16 03:35:07.332206 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-16 03:35:07.332217 | orchestrator |  "vg_name": "ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e" 2026-02-16 03:35:07.332228 | orchestrator |  }, 2026-02-16 03:35:07.332238 | orchestrator |  { 2026-02-16 03:35:07.332249 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-16 03:35:07.332260 | orchestrator |  "vg_name": "ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74" 2026-02-16 03:35:07.332272 | orchestrator |  } 2026-02-16 03:35:07.332283 | orchestrator |  ] 2026-02-16 03:35:07.332294 | orchestrator |  } 2026-02-16 03:35:07.332305 | orchestrator | } 2026-02-16 03:35:07.332316 | orchestrator | 2026-02-16 03:35:07.332334 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-16 03:35:07.332345 | orchestrator | 2026-02-16 03:35:07.332356 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-16 03:35:07.332368 | orchestrator | Monday 16 February 2026 03:35:04 +0000 (0:00:00.307) 0:00:24.815 ******* 2026-02-16 03:35:07.332379 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-16 03:35:07.332390 | orchestrator | 2026-02-16 03:35:07.332401 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-16 
03:35:07.332412 | orchestrator | Monday 16 February 2026 03:35:05 +0000 (0:00:00.251) 0:00:25.066 ******* 2026-02-16 03:35:07.332423 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:35:07.332434 | orchestrator | 2026-02-16 03:35:07.332445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:35:07.332456 | orchestrator | Monday 16 February 2026 03:35:05 +0000 (0:00:00.229) 0:00:25.296 ******* 2026-02-16 03:35:07.332467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-16 03:35:07.332478 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-16 03:35:07.332488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-16 03:35:07.332499 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-16 03:35:07.332545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-16 03:35:07.332557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-16 03:35:07.332568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-16 03:35:07.332579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-16 03:35:07.332592 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-16 03:35:07.332610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-16 03:35:07.332640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-16 03:35:07.332659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-16 03:35:07.332677 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-16 03:35:07.332694 | orchestrator |
2026-02-16 03:35:07.332712 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:35:07.332728 | orchestrator | Monday 16 February 2026 03:35:05 +0000 (0:00:00.421) 0:00:25.718 *******
2026-02-16 03:35:07.332743 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:07.332758 | orchestrator |
2026-02-16 03:35:07.332774 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:35:07.332803 | orchestrator | Monday 16 February 2026 03:35:05 +0000 (0:00:00.207) 0:00:25.926 *******
2026-02-16 03:35:07.332820 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:07.332837 | orchestrator |
2026-02-16 03:35:07.332855 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:35:07.332873 | orchestrator | Monday 16 February 2026 03:35:06 +0000 (0:00:00.629) 0:00:26.555 *******
2026-02-16 03:35:07.332890 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:07.332908 | orchestrator |
2026-02-16 03:35:07.332925 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:35:07.332945 | orchestrator | Monday 16 February 2026 03:35:06 +0000 (0:00:00.215) 0:00:26.772 *******
2026-02-16 03:35:07.332964 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:07.332982 | orchestrator |
2026-02-16 03:35:07.332999 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:35:07.333010 | orchestrator | Monday 16 February 2026 03:35:06 +0000 (0:00:00.197) 0:00:26.969 *******
2026-02-16 03:35:07.333020 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:07.333031 | orchestrator |
2026-02-16 03:35:07.333042 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:35:07.333053 | orchestrator | Monday 16 February 2026 03:35:07 +0000 (0:00:00.202) 0:00:27.171 *******
2026-02-16 03:35:07.333063 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:07.333074 | orchestrator |
2026-02-16 03:35:07.333097 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:35:18.363251 | orchestrator | Monday 16 February 2026 03:35:07 +0000 (0:00:00.195) 0:00:27.367 *******
2026-02-16 03:35:18.363374 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:18.363391 | orchestrator |
2026-02-16 03:35:18.363405 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:35:18.363417 | orchestrator | Monday 16 February 2026 03:35:07 +0000 (0:00:00.213) 0:00:27.581 *******
2026-02-16 03:35:18.363429 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:18.363440 | orchestrator |
2026-02-16 03:35:18.363452 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:35:18.363463 | orchestrator | Monday 16 February 2026 03:35:07 +0000 (0:00:00.211) 0:00:27.793 *******
2026-02-16 03:35:18.363474 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29)
2026-02-16 03:35:18.363486 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29)
2026-02-16 03:35:18.363498 | orchestrator |
2026-02-16 03:35:18.363509 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:35:18.363558 | orchestrator | Monday 16 February 2026 03:35:08 +0000 (0:00:00.445) 0:00:28.239 *******
2026-02-16 03:35:18.363590 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829)
2026-02-16 03:35:18.363629 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829)
2026-02-16 03:35:18.363648 | orchestrator |
2026-02-16 03:35:18.363666 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:35:18.363684 | orchestrator | Monday 16 February 2026 03:35:08 +0000 (0:00:00.471) 0:00:28.710 *******
2026-02-16 03:35:18.363703 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e)
2026-02-16 03:35:18.363720 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e)
2026-02-16 03:35:18.363737 | orchestrator |
2026-02-16 03:35:18.363755 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:35:18.363773 | orchestrator | Monday 16 February 2026 03:35:09 +0000 (0:00:00.678) 0:00:29.388 *******
2026-02-16 03:35:18.363791 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705)
2026-02-16 03:35:18.363809 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705)
2026-02-16 03:35:18.363858 | orchestrator |
2026-02-16 03:35:18.363878 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-16 03:35:18.363895 | orchestrator | Monday 16 February 2026 03:35:10 +0000 (0:00:00.872) 0:00:30.261 *******
2026-02-16 03:35:18.363913 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-16 03:35:18.363930 | orchestrator |
2026-02-16 03:35:18.363947 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:35:18.363964 | orchestrator | Monday 16 February 2026 03:35:10 +0000 (0:00:00.352) 0:00:30.614 *******
2026-02-16 03:35:18.363980 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-02-16 03:35:18.363999 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-02-16 03:35:18.364018 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-02-16 03:35:18.364036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-02-16 03:35:18.364055 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-02-16 03:35:18.364072 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-02-16 03:35:18.364090 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-02-16 03:35:18.364102 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-02-16 03:35:18.364112 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-02-16 03:35:18.364124 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-02-16 03:35:18.364142 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-02-16 03:35:18.364166 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-02-16 03:35:18.364190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-02-16 03:35:18.364207 | orchestrator |
2026-02-16 03:35:18.364224 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:35:18.364241 | orchestrator | Monday 16 February 2026 03:35:11 +0000 (0:00:00.479) 0:00:31.094 *******
2026-02-16 03:35:18.364258 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:18.364276 | orchestrator |
2026-02-16 03:35:18.364295 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:35:18.364313 | orchestrator | Monday 16 February 2026 03:35:11 +0000 (0:00:00.210) 0:00:31.304 *******
2026-02-16 03:35:18.364331 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:18.364346 | orchestrator |
2026-02-16 03:35:18.364357 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:35:18.364368 | orchestrator | Monday 16 February 2026 03:35:11 +0000 (0:00:00.207) 0:00:31.512 *******
2026-02-16 03:35:18.364379 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:18.364390 | orchestrator |
2026-02-16 03:35:18.364424 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:35:18.364436 | orchestrator | Monday 16 February 2026 03:35:11 +0000 (0:00:00.207) 0:00:31.719 *******
2026-02-16 03:35:18.364446 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:18.364457 | orchestrator |
2026-02-16 03:35:18.364468 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:35:18.364479 | orchestrator | Monday 16 February 2026 03:35:11 +0000 (0:00:00.202) 0:00:31.922 *******
2026-02-16 03:35:18.364490 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:18.364501 | orchestrator |
2026-02-16 03:35:18.364512 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:35:18.364555 | orchestrator | Monday 16 February 2026 03:35:12 +0000 (0:00:00.233) 0:00:32.155 *******
2026-02-16 03:35:18.364579 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:18.364590 | orchestrator |
2026-02-16 03:35:18.364602 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:35:18.364613 | orchestrator | Monday 16 February 2026 03:35:12 +0000 (0:00:00.198) 0:00:32.354 *******
2026-02-16 03:35:18.364624 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:18.364635 | orchestrator |
2026-02-16 03:35:18.364646 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:35:18.364657 | orchestrator | Monday 16 February 2026 03:35:12 +0000 (0:00:00.207) 0:00:32.562 *******
2026-02-16 03:35:18.364668 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:18.364678 | orchestrator |
2026-02-16 03:35:18.364697 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:35:18.364709 | orchestrator | Monday 16 February 2026 03:35:13 +0000 (0:00:00.628) 0:00:33.190 *******
2026-02-16 03:35:18.364720 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-02-16 03:35:18.364730 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-02-16 03:35:18.364742 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-02-16 03:35:18.364752 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-02-16 03:35:18.364763 | orchestrator |
2026-02-16 03:35:18.364774 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:35:18.364785 | orchestrator | Monday 16 February 2026 03:35:13 +0000 (0:00:00.673) 0:00:33.863 *******
2026-02-16 03:35:18.364796 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:18.364807 | orchestrator |
2026-02-16 03:35:18.364821 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:35:18.364840 | orchestrator | Monday 16 February 2026 03:35:14 +0000 (0:00:00.210) 0:00:34.073 *******
2026-02-16 03:35:18.364857 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:18.364875 | orchestrator |
2026-02-16 03:35:18.364893 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:35:18.364910 | orchestrator | Monday 16 February 2026 03:35:14 +0000 (0:00:00.216) 0:00:34.290 *******
2026-02-16 03:35:18.364925 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:18.364940 | orchestrator |
2026-02-16 03:35:18.364957 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-16 03:35:18.364976 | orchestrator | Monday 16 February 2026 03:35:14 +0000 (0:00:00.212) 0:00:34.502 *******
2026-02-16 03:35:18.364993 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:18.365013 | orchestrator |
2026-02-16 03:35:18.365032 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-16 03:35:18.365049 | orchestrator | Monday 16 February 2026 03:35:14 +0000 (0:00:00.209) 0:00:34.711 *******
2026-02-16 03:35:18.365065 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:18.365076 | orchestrator |
2026-02-16 03:35:18.365087 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-16 03:35:18.365098 | orchestrator | Monday 16 February 2026 03:35:14 +0000 (0:00:00.141) 0:00:34.852 *******
2026-02-16 03:35:18.365109 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'}})
2026-02-16 03:35:18.365121 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3ec6a818-dc71-5cb4-ac47-83f209d09bca'}})
2026-02-16 03:35:18.365131 | orchestrator |
2026-02-16 03:35:18.365142 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-16 03:35:18.365153 | orchestrator | Monday 16 February 2026 03:35:14 +0000 (0:00:00.191) 0:00:35.044 *******
2026-02-16 03:35:18.365165 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:18.365178 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:18.365198 | orchestrator |
2026-02-16 03:35:18.365209 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-16 03:35:18.365220 | orchestrator | Monday 16 February 2026 03:35:16 +0000 (0:00:01.861) 0:00:36.905 *******
2026-02-16 03:35:18.365231 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:18.365243 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:18.365254 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:18.365265 | orchestrator |
2026-02-16 03:35:18.365275 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-16 03:35:18.365286 | orchestrator | Monday 16 February 2026 03:35:17 +0000 (0:00:00.155) 0:00:37.061 *******
2026-02-16 03:35:18.365297 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:18.365319 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:24.205362 | orchestrator |
2026-02-16 03:35:24.205486 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-16 03:35:24.205504 | orchestrator | Monday 16 February 2026 03:35:18 +0000 (0:00:01.332) 0:00:38.393 *******
2026-02-16 03:35:24.205517 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:24.205621 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:24.205641 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.205659 | orchestrator |
2026-02-16 03:35:24.205672 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-16 03:35:24.205683 | orchestrator | Monday 16 February 2026 03:35:18 +0000 (0:00:00.377) 0:00:38.770 *******
2026-02-16 03:35:24.205694 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.205705 | orchestrator |
2026-02-16 03:35:24.205716 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-16 03:35:24.205745 | orchestrator | Monday 16 February 2026 03:35:18 +0000 (0:00:00.144) 0:00:38.915 *******
2026-02-16 03:35:24.205757 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:24.205768 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:24.205779 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.205790 | orchestrator |
2026-02-16 03:35:24.205801 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-16 03:35:24.205812 | orchestrator | Monday 16 February 2026 03:35:19 +0000 (0:00:00.164) 0:00:39.080 *******
2026-02-16 03:35:24.205823 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.205834 | orchestrator |
2026-02-16 03:35:24.205847 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-16 03:35:24.205859 | orchestrator | Monday 16 February 2026 03:35:19 +0000 (0:00:00.149) 0:00:39.229 *******
2026-02-16 03:35:24.205873 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:24.205885 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:24.205898 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.205934 | orchestrator |
2026-02-16 03:35:24.205948 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-16 03:35:24.205960 | orchestrator | Monday 16 February 2026 03:35:19 +0000 (0:00:00.170) 0:00:39.400 *******
2026-02-16 03:35:24.205972 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.205983 | orchestrator |
2026-02-16 03:35:24.205994 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-16 03:35:24.206005 | orchestrator | Monday 16 February 2026 03:35:19 +0000 (0:00:00.149) 0:00:39.550 *******
2026-02-16 03:35:24.206073 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:24.206085 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:24.206097 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.206108 | orchestrator |
2026-02-16 03:35:24.206118 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-16 03:35:24.206129 | orchestrator | Monday 16 February 2026 03:35:19 +0000 (0:00:00.151) 0:00:39.710 *******
2026-02-16 03:35:24.206140 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:35:24.206152 | orchestrator |
2026-02-16 03:35:24.206163 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-16 03:35:24.206174 | orchestrator | Monday 16 February 2026 03:35:19 +0000 (0:00:00.151) 0:00:39.861 *******
2026-02-16 03:35:24.206185 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:24.206196 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:24.206207 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.206218 | orchestrator |
2026-02-16 03:35:24.206229 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-16 03:35:24.206240 | orchestrator | Monday 16 February 2026 03:35:19 +0000 (0:00:00.151) 0:00:40.012 *******
2026-02-16 03:35:24.206250 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:24.206261 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:24.206272 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.206283 | orchestrator |
2026-02-16 03:35:24.206294 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-16 03:35:24.206323 | orchestrator | Monday 16 February 2026 03:35:20 +0000 (0:00:00.183) 0:00:40.196 *******
2026-02-16 03:35:24.206335 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:24.206346 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:24.206357 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.206368 | orchestrator |
2026-02-16 03:35:24.206379 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-16 03:35:24.206390 | orchestrator | Monday 16 February 2026 03:35:20 +0000 (0:00:00.155) 0:00:40.352 *******
2026-02-16 03:35:24.206401 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.206412 | orchestrator |
2026-02-16 03:35:24.206423 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-16 03:35:24.206434 | orchestrator | Monday 16 February 2026 03:35:20 +0000 (0:00:00.343) 0:00:40.696 *******
2026-02-16 03:35:24.206444 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.206455 | orchestrator |
2026-02-16 03:35:24.206486 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-16 03:35:24.206498 | orchestrator | Monday 16 February 2026 03:35:20 +0000 (0:00:00.144) 0:00:40.840 *******
2026-02-16 03:35:24.206509 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.206520 | orchestrator |
2026-02-16 03:35:24.206601 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-16 03:35:24.206612 | orchestrator | Monday 16 February 2026 03:35:20 +0000 (0:00:00.135) 0:00:40.976 *******
2026-02-16 03:35:24.206623 | orchestrator | ok: [testbed-node-4] => {
2026-02-16 03:35:24.206634 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-02-16 03:35:24.206645 | orchestrator | }
2026-02-16 03:35:24.206656 | orchestrator |
2026-02-16 03:35:24.206667 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-16 03:35:24.206678 | orchestrator | Monday 16 February 2026 03:35:21 +0000 (0:00:00.157) 0:00:41.133 *******
2026-02-16 03:35:24.206689 | orchestrator | ok: [testbed-node-4] => {
2026-02-16 03:35:24.206700 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-02-16 03:35:24.206711 | orchestrator | }
2026-02-16 03:35:24.206722 | orchestrator |
2026-02-16 03:35:24.206733 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-16 03:35:24.206744 | orchestrator | Monday 16 February 2026 03:35:21 +0000 (0:00:00.147) 0:00:41.280 *******
2026-02-16 03:35:24.206755 | orchestrator | ok: [testbed-node-4] => {
2026-02-16 03:35:24.206765 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-02-16 03:35:24.206776 | orchestrator | }
2026-02-16 03:35:24.206787 | orchestrator |
2026-02-16 03:35:24.206798 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-16 03:35:24.206809 | orchestrator | Monday 16 February 2026 03:35:21 +0000 (0:00:00.142) 0:00:41.422 *******
2026-02-16 03:35:24.206819 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:35:24.206830 | orchestrator |
2026-02-16 03:35:24.206841 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-16 03:35:24.206852 | orchestrator | Monday 16 February 2026 03:35:21 +0000 (0:00:00.533) 0:00:41.956 *******
2026-02-16 03:35:24.206862 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:35:24.206873 | orchestrator |
2026-02-16 03:35:24.206884 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-16 03:35:24.206895 | orchestrator | Monday 16 February 2026 03:35:22 +0000 (0:00:00.519) 0:00:42.475 *******
2026-02-16 03:35:24.206906 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:35:24.206916 | orchestrator |
2026-02-16 03:35:24.206927 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-16 03:35:24.206938 | orchestrator | Monday 16 February 2026 03:35:22 +0000 (0:00:00.512) 0:00:42.988 *******
2026-02-16 03:35:24.206949 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:35:24.206959 | orchestrator |
2026-02-16 03:35:24.206970 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-16 03:35:24.206981 | orchestrator | Monday 16 February 2026 03:35:23 +0000 (0:00:00.149) 0:00:43.138 *******
2026-02-16 03:35:24.206992 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.207003 | orchestrator |
2026-02-16 03:35:24.207014 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-16 03:35:24.207024 | orchestrator | Monday 16 February 2026 03:35:23 +0000 (0:00:00.112) 0:00:43.250 *******
2026-02-16 03:35:24.207035 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.207046 | orchestrator |
2026-02-16 03:35:24.207057 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-16 03:35:24.207068 | orchestrator | Monday 16 February 2026 03:35:23 +0000 (0:00:00.313) 0:00:43.564 *******
2026-02-16 03:35:24.207078 | orchestrator | ok: [testbed-node-4] => {
2026-02-16 03:35:24.207089 | orchestrator |  "vgs_report": {
2026-02-16 03:35:24.207101 | orchestrator |  "vg": []
2026-02-16 03:35:24.207112 | orchestrator |  }
2026-02-16 03:35:24.207123 | orchestrator | }
2026-02-16 03:35:24.207134 | orchestrator |
2026-02-16 03:35:24.207145 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-16 03:35:24.207164 | orchestrator | Monday 16 February 2026 03:35:23 +0000 (0:00:00.146) 0:00:43.710 *******
2026-02-16 03:35:24.207176 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.207186 | orchestrator |
2026-02-16 03:35:24.207197 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-16 03:35:24.207208 | orchestrator | Monday 16 February 2026 03:35:23 +0000 (0:00:00.129) 0:00:43.839 *******
2026-02-16 03:35:24.207219 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.207229 | orchestrator |
2026-02-16 03:35:24.207240 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-16 03:35:24.207251 | orchestrator | Monday 16 February 2026 03:35:23 +0000 (0:00:00.132) 0:00:43.972 *******
2026-02-16 03:35:24.207262 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.207273 | orchestrator |
2026-02-16 03:35:24.207284 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-16 03:35:24.207295 | orchestrator | Monday 16 February 2026 03:35:24 +0000 (0:00:00.143) 0:00:44.115 *******
2026-02-16 03:35:24.207306 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:24.207317 | orchestrator |
2026-02-16 03:35:24.207336 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-16 03:35:28.932494 | orchestrator | Monday 16 February 2026 03:35:24 +0000 (0:00:00.126) 0:00:44.242 *******
2026-02-16 03:35:28.932661 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.932677 | orchestrator |
2026-02-16 03:35:28.932688 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-16 03:35:28.932699 | orchestrator | Monday 16 February 2026 03:35:24 +0000 (0:00:00.145) 0:00:44.388 *******
2026-02-16 03:35:28.932709 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.932719 | orchestrator |
2026-02-16 03:35:28.932729 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-16 03:35:28.932739 | orchestrator | Monday 16 February 2026 03:35:24 +0000 (0:00:00.157) 0:00:44.545 *******
2026-02-16 03:35:28.932749 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.932758 | orchestrator |
2026-02-16 03:35:28.932768 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-16 03:35:28.932778 | orchestrator | Monday 16 February 2026 03:35:24 +0000 (0:00:00.136) 0:00:44.681 *******
2026-02-16 03:35:28.932787 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.932797 | orchestrator |
2026-02-16 03:35:28.932806 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-16 03:35:28.932832 | orchestrator | Monday 16 February 2026 03:35:24 +0000 (0:00:00.127) 0:00:44.809 *******
2026-02-16 03:35:28.932842 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.932852 | orchestrator |
2026-02-16 03:35:28.932862 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-16 03:35:28.932871 | orchestrator | Monday 16 February 2026 03:35:24 +0000 (0:00:00.121) 0:00:44.931 *******
2026-02-16 03:35:28.932881 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.932890 | orchestrator |
2026-02-16 03:35:28.932900 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-16 03:35:28.932909 | orchestrator | Monday 16 February 2026 03:35:25 +0000 (0:00:00.336) 0:00:45.267 *******
2026-02-16 03:35:28.932919 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.932929 | orchestrator |
2026-02-16 03:35:28.932939 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-16 03:35:28.932949 | orchestrator | Monday 16 February 2026 03:35:25 +0000 (0:00:00.140) 0:00:45.408 *******
2026-02-16 03:35:28.932958 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.932967 | orchestrator |
2026-02-16 03:35:28.932977 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-16 03:35:28.932987 | orchestrator | Monday 16 February 2026 03:35:25 +0000 (0:00:00.149) 0:00:45.557 *******
2026-02-16 03:35:28.932996 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.933006 | orchestrator |
2026-02-16 03:35:28.933015 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-16 03:35:28.933046 | orchestrator | Monday 16 February 2026 03:35:25 +0000 (0:00:00.139) 0:00:45.697 *******
2026-02-16 03:35:28.933056 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.933065 | orchestrator |
2026-02-16 03:35:28.933075 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-16 03:35:28.933084 | orchestrator | Monday 16 February 2026 03:35:25 +0000 (0:00:00.146) 0:00:45.843 *******
2026-02-16 03:35:28.933095 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:28.933107 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:28.933116 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.933126 | orchestrator |
2026-02-16 03:35:28.933136 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-16 03:35:28.933145 | orchestrator | Monday 16 February 2026 03:35:25 +0000 (0:00:00.163) 0:00:46.006 *******
2026-02-16 03:35:28.933155 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:28.933164 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:28.933174 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.933184 | orchestrator |
2026-02-16 03:35:28.933193 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-16 03:35:28.933203 | orchestrator | Monday 16 February 2026 03:35:26 +0000 (0:00:00.146) 0:00:46.153 *******
2026-02-16 03:35:28.933212 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:28.933222 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:28.933231 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.933241 | orchestrator |
2026-02-16 03:35:28.933250 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-16 03:35:28.933260 | orchestrator | Monday 16 February 2026 03:35:26 +0000 (0:00:00.165) 0:00:46.319 *******
2026-02-16 03:35:28.933270 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:28.933280 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:28.933290 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.933299 | orchestrator |
2026-02-16 03:35:28.933325 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-16 03:35:28.933335 | orchestrator | Monday 16 February 2026 03:35:26 +0000 (0:00:00.162) 0:00:46.481 *******
2026-02-16 03:35:28.933345 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:28.933355 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:28.933364 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.933374 | orchestrator |
2026-02-16 03:35:28.933383 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-16 03:35:28.933393 | orchestrator | Monday 16 February 2026 03:35:26 +0000 (0:00:00.158) 0:00:46.640 *******
2026-02-16 03:35:28.933403 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:28.933424 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:28.933434 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.933444 | orchestrator |
2026-02-16 03:35:28.933454 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-16 03:35:28.933463 | orchestrator | Monday 16 February 2026 03:35:26 +0000 (0:00:00.152) 0:00:46.793 *******
2026-02-16 03:35:28.933473 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:28.933483 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:28.933493 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.933502 | orchestrator |
2026-02-16 03:35:28.933512 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-16 03:35:28.933521 | orchestrator | Monday 16 February 2026 03:35:27 +0000 (0:00:00.353) 0:00:47.146 *******
2026-02-16 03:35:28.933549 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:28.933559 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:28.933569 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.933578 | orchestrator |
2026-02-16 03:35:28.933588 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-16 03:35:28.933598 | orchestrator | Monday 16 February 2026 03:35:27 +0000 (0:00:00.152) 0:00:47.298 *******
2026-02-16 03:35:28.933607 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:35:28.933617 | orchestrator |
2026-02-16 03:35:28.933627 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-16 03:35:28.933636 | orchestrator | Monday 16 February 2026 03:35:27 +0000 (0:00:00.510) 0:00:47.809 *******
2026-02-16 03:35:28.933646 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:35:28.933656 | orchestrator |
2026-02-16 03:35:28.933665 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-16 03:35:28.933675 | orchestrator | Monday 16 February 2026 03:35:28 +0000 (0:00:00.525) 0:00:48.334 *******
2026-02-16 03:35:28.933684 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:35:28.933694 | orchestrator |
2026-02-16 03:35:28.933704 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-16 03:35:28.933713 | orchestrator | Monday 16 February 2026 03:35:28 +0000 (0:00:00.144) 0:00:48.478 *******
2026-02-16 03:35:28.933723 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'vg_name': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:28.933734 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'vg_name': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:28.933743 | orchestrator |
2026-02-16 03:35:28.933753 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-16 03:35:28.933763 | orchestrator | Monday 16 February 2026 03:35:28 +0000 (0:00:00.170) 0:00:48.649 *******
2026-02-16 03:35:28.933773 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:28.933782 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:28.933792 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:28.933802 | orchestrator |
2026-02-16 03:35:28.933811 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-16 03:35:28.933827 | orchestrator | Monday 16 February 2026 03:35:28 +0000 (0:00:00.156) 0:00:48.806 *******
2026-02-16 03:35:28.933836 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 03:35:28.933853 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 03:35:35.406266 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:35:35.406380 | orchestrator |
2026-02-16 03:35:35.406395 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-16 03:35:35.406407 |
orchestrator | Monday 16 February 2026 03:35:28 +0000 (0:00:00.165) 0:00:48.971 ******* 2026-02-16 03:35:35.406418 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})  2026-02-16 03:35:35.406429 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})  2026-02-16 03:35:35.406439 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:35:35.406449 | orchestrator | 2026-02-16 03:35:35.406459 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-16 03:35:35.406469 | orchestrator | Monday 16 February 2026 03:35:29 +0000 (0:00:00.157) 0:00:49.129 ******* 2026-02-16 03:35:35.406492 | orchestrator | ok: [testbed-node-4] => { 2026-02-16 03:35:35.406503 | orchestrator |  "lvm_report": { 2026-02-16 03:35:35.406514 | orchestrator |  "lv": [ 2026-02-16 03:35:35.406524 | orchestrator |  { 2026-02-16 03:35:35.406556 | orchestrator |  "lv_name": "osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca", 2026-02-16 03:35:35.406568 | orchestrator |  "vg_name": "ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca" 2026-02-16 03:35:35.406578 | orchestrator |  }, 2026-02-16 03:35:35.406588 | orchestrator |  { 2026-02-16 03:35:35.406598 | orchestrator |  "lv_name": "osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d", 2026-02-16 03:35:35.406608 | orchestrator |  "vg_name": "ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d" 2026-02-16 03:35:35.406617 | orchestrator |  } 2026-02-16 03:35:35.406627 | orchestrator |  ], 2026-02-16 03:35:35.406637 | orchestrator |  "pv": [ 2026-02-16 03:35:35.406646 | orchestrator |  { 2026-02-16 03:35:35.406656 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-16 03:35:35.406666 | orchestrator |  "vg_name": "ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d" 2026-02-16 03:35:35.406676 | orchestrator |  }, 2026-02-16 
03:35:35.406685 | orchestrator |  { 2026-02-16 03:35:35.406695 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-16 03:35:35.406705 | orchestrator |  "vg_name": "ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca" 2026-02-16 03:35:35.406715 | orchestrator |  } 2026-02-16 03:35:35.406725 | orchestrator |  ] 2026-02-16 03:35:35.406734 | orchestrator |  } 2026-02-16 03:35:35.406744 | orchestrator | } 2026-02-16 03:35:35.406754 | orchestrator | 2026-02-16 03:35:35.406764 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-16 03:35:35.406775 | orchestrator | 2026-02-16 03:35:35.406786 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-16 03:35:35.406797 | orchestrator | Monday 16 February 2026 03:35:29 +0000 (0:00:00.283) 0:00:49.412 ******* 2026-02-16 03:35:35.406808 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-16 03:35:35.406819 | orchestrator | 2026-02-16 03:35:35.406829 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-16 03:35:35.406846 | orchestrator | Monday 16 February 2026 03:35:30 +0000 (0:00:00.648) 0:00:50.061 ******* 2026-02-16 03:35:35.406865 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:35:35.406916 | orchestrator | 2026-02-16 03:35:35.406934 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:35:35.406950 | orchestrator | Monday 16 February 2026 03:35:30 +0000 (0:00:00.248) 0:00:50.309 ******* 2026-02-16 03:35:35.406966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-16 03:35:35.406983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-16 03:35:35.407000 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-16 03:35:35.407015 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-16 03:35:35.407032 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-16 03:35:35.407049 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-16 03:35:35.407061 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-16 03:35:35.407072 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-16 03:35:35.407082 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-16 03:35:35.407093 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-16 03:35:35.407104 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-16 03:35:35.407115 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-16 03:35:35.407126 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-16 03:35:35.407135 | orchestrator | 2026-02-16 03:35:35.407145 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:35:35.407155 | orchestrator | Monday 16 February 2026 03:35:30 +0000 (0:00:00.410) 0:00:50.719 ******* 2026-02-16 03:35:35.407164 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:35.407174 | orchestrator | 2026-02-16 03:35:35.407183 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:35:35.407193 | orchestrator | Monday 16 February 2026 03:35:30 +0000 (0:00:00.210) 0:00:50.929 ******* 2026-02-16 03:35:35.407202 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:35.407212 | orchestrator | 2026-02-16 
03:35:35.407221 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:35:35.407248 | orchestrator | Monday 16 February 2026 03:35:31 +0000 (0:00:00.207) 0:00:51.137 ******* 2026-02-16 03:35:35.407258 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:35.407268 | orchestrator | 2026-02-16 03:35:35.407278 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:35:35.407287 | orchestrator | Monday 16 February 2026 03:35:31 +0000 (0:00:00.199) 0:00:51.336 ******* 2026-02-16 03:35:35.407296 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:35.407306 | orchestrator | 2026-02-16 03:35:35.407316 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:35:35.407325 | orchestrator | Monday 16 February 2026 03:35:31 +0000 (0:00:00.203) 0:00:51.540 ******* 2026-02-16 03:35:35.407335 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:35.407344 | orchestrator | 2026-02-16 03:35:35.407354 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:35:35.407364 | orchestrator | Monday 16 February 2026 03:35:31 +0000 (0:00:00.197) 0:00:51.738 ******* 2026-02-16 03:35:35.407373 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:35.407383 | orchestrator | 2026-02-16 03:35:35.407400 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:35:35.407410 | orchestrator | Monday 16 February 2026 03:35:31 +0000 (0:00:00.198) 0:00:51.936 ******* 2026-02-16 03:35:35.407419 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:35.407429 | orchestrator | 2026-02-16 03:35:35.407438 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:35:35.407456 | orchestrator | Monday 16 February 2026 03:35:32 +0000 (0:00:00.242) 
0:00:52.178 ******* 2026-02-16 03:35:35.407466 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:35.407476 | orchestrator | 2026-02-16 03:35:35.407485 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:35:35.407495 | orchestrator | Monday 16 February 2026 03:35:32 +0000 (0:00:00.653) 0:00:52.832 ******* 2026-02-16 03:35:35.407504 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d) 2026-02-16 03:35:35.407515 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d) 2026-02-16 03:35:35.407525 | orchestrator | 2026-02-16 03:35:35.407555 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:35:35.407565 | orchestrator | Monday 16 February 2026 03:35:33 +0000 (0:00:00.431) 0:00:53.263 ******* 2026-02-16 03:35:35.407574 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5) 2026-02-16 03:35:35.407584 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5) 2026-02-16 03:35:35.407594 | orchestrator | 2026-02-16 03:35:35.407603 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:35:35.407613 | orchestrator | Monday 16 February 2026 03:35:33 +0000 (0:00:00.448) 0:00:53.712 ******* 2026-02-16 03:35:35.407623 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569) 2026-02-16 03:35:35.407632 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569) 2026-02-16 03:35:35.407642 | orchestrator | 2026-02-16 03:35:35.407652 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:35:35.407661 | orchestrator | Monday 16 
February 2026 03:35:34 +0000 (0:00:00.456) 0:00:54.169 ******* 2026-02-16 03:35:35.407671 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d) 2026-02-16 03:35:35.407680 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d) 2026-02-16 03:35:35.407690 | orchestrator | 2026-02-16 03:35:35.407700 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-16 03:35:35.407709 | orchestrator | Monday 16 February 2026 03:35:34 +0000 (0:00:00.506) 0:00:54.676 ******* 2026-02-16 03:35:35.407719 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-16 03:35:35.407729 | orchestrator | 2026-02-16 03:35:35.407738 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:35:35.407748 | orchestrator | Monday 16 February 2026 03:35:34 +0000 (0:00:00.333) 0:00:55.009 ******* 2026-02-16 03:35:35.407757 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-16 03:35:35.407767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-16 03:35:35.407776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-16 03:35:35.407786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-16 03:35:35.407795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-16 03:35:35.407805 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-16 03:35:35.407814 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-16 03:35:35.407824 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-16 03:35:35.407833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-16 03:35:35.407843 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-16 03:35:35.407858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-02-16 03:35:35.407874 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-16 03:35:44.202771 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-16 03:35:44.202887 | orchestrator | 2026-02-16 03:35:44.202903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:35:44.202915 | orchestrator | Monday 16 February 2026 03:35:35 +0000 (0:00:00.425) 0:00:55.435 ******* 2026-02-16 03:35:44.202927 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.202939 | orchestrator | 2026-02-16 03:35:44.202951 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:35:44.202962 | orchestrator | Monday 16 February 2026 03:35:35 +0000 (0:00:00.187) 0:00:55.622 ******* 2026-02-16 03:35:44.202973 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.202984 | orchestrator | 2026-02-16 03:35:44.202995 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:35:44.203006 | orchestrator | Monday 16 February 2026 03:35:35 +0000 (0:00:00.207) 0:00:55.830 ******* 2026-02-16 03:35:44.203032 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.203044 | orchestrator | 2026-02-16 03:35:44.203055 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:35:44.203066 | 
orchestrator | Monday 16 February 2026 03:35:35 +0000 (0:00:00.210) 0:00:56.040 ******* 2026-02-16 03:35:44.203077 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.203088 | orchestrator | 2026-02-16 03:35:44.203099 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:35:44.203110 | orchestrator | Monday 16 February 2026 03:35:36 +0000 (0:00:00.196) 0:00:56.236 ******* 2026-02-16 03:35:44.203121 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.203132 | orchestrator | 2026-02-16 03:35:44.203142 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:35:44.203153 | orchestrator | Monday 16 February 2026 03:35:36 +0000 (0:00:00.597) 0:00:56.834 ******* 2026-02-16 03:35:44.203164 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.203175 | orchestrator | 2026-02-16 03:35:44.203186 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:35:44.203197 | orchestrator | Monday 16 February 2026 03:35:36 +0000 (0:00:00.214) 0:00:57.048 ******* 2026-02-16 03:35:44.203208 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.203218 | orchestrator | 2026-02-16 03:35:44.203229 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:35:44.203240 | orchestrator | Monday 16 February 2026 03:35:37 +0000 (0:00:00.213) 0:00:57.261 ******* 2026-02-16 03:35:44.203251 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.203262 | orchestrator | 2026-02-16 03:35:44.203273 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:35:44.203284 | orchestrator | Monday 16 February 2026 03:35:37 +0000 (0:00:00.206) 0:00:57.468 ******* 2026-02-16 03:35:44.203295 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-16 03:35:44.203308 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-02-16 03:35:44.203320 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-16 03:35:44.203333 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-16 03:35:44.203346 | orchestrator | 2026-02-16 03:35:44.203358 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:35:44.203370 | orchestrator | Monday 16 February 2026 03:35:38 +0000 (0:00:00.649) 0:00:58.117 ******* 2026-02-16 03:35:44.203383 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.203395 | orchestrator | 2026-02-16 03:35:44.203408 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:35:44.203420 | orchestrator | Monday 16 February 2026 03:35:38 +0000 (0:00:00.200) 0:00:58.318 ******* 2026-02-16 03:35:44.203454 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.203467 | orchestrator | 2026-02-16 03:35:44.203479 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:35:44.203493 | orchestrator | Monday 16 February 2026 03:35:38 +0000 (0:00:00.202) 0:00:58.520 ******* 2026-02-16 03:35:44.203505 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.203517 | orchestrator | 2026-02-16 03:35:44.203529 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-16 03:35:44.203541 | orchestrator | Monday 16 February 2026 03:35:38 +0000 (0:00:00.194) 0:00:58.714 ******* 2026-02-16 03:35:44.203576 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.203588 | orchestrator | 2026-02-16 03:35:44.203601 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-16 03:35:44.203614 | orchestrator | Monday 16 February 2026 03:35:38 +0000 (0:00:00.202) 0:00:58.917 ******* 2026-02-16 03:35:44.203626 | orchestrator | skipping: [testbed-node-5] 2026-02-16 
03:35:44.203639 | orchestrator | 2026-02-16 03:35:44.203651 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-16 03:35:44.203663 | orchestrator | Monday 16 February 2026 03:35:39 +0000 (0:00:00.140) 0:00:59.057 ******* 2026-02-16 03:35:44.203674 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}}) 2026-02-16 03:35:44.203686 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f418f421-cc32-53ce-b421-39353fe37c02'}}) 2026-02-16 03:35:44.203697 | orchestrator | 2026-02-16 03:35:44.203709 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-16 03:35:44.203720 | orchestrator | Monday 16 February 2026 03:35:39 +0000 (0:00:00.204) 0:00:59.262 ******* 2026-02-16 03:35:44.203731 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}) 2026-02-16 03:35:44.203744 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'}) 2026-02-16 03:35:44.203755 | orchestrator | 2026-02-16 03:35:44.203766 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-16 03:35:44.203794 | orchestrator | Monday 16 February 2026 03:35:41 +0000 (0:00:01.893) 0:01:01.156 ******* 2026-02-16 03:35:44.203806 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:44.203819 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:44.203830 | orchestrator | skipping: 
[testbed-node-5] 2026-02-16 03:35:44.203841 | orchestrator | 2026-02-16 03:35:44.203852 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-16 03:35:44.203863 | orchestrator | Monday 16 February 2026 03:35:41 +0000 (0:00:00.363) 0:01:01.520 ******* 2026-02-16 03:35:44.203879 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}) 2026-02-16 03:35:44.203891 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'}) 2026-02-16 03:35:44.203902 | orchestrator | 2026-02-16 03:35:44.203913 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-16 03:35:44.203923 | orchestrator | Monday 16 February 2026 03:35:42 +0000 (0:00:01.371) 0:01:02.891 ******* 2026-02-16 03:35:44.203934 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:44.203945 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:44.203964 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.203975 | orchestrator | 2026-02-16 03:35:44.203986 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-16 03:35:44.203997 | orchestrator | Monday 16 February 2026 03:35:42 +0000 (0:00:00.156) 0:01:03.048 ******* 2026-02-16 03:35:44.204008 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.204018 | orchestrator | 2026-02-16 03:35:44.204029 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-16 03:35:44.204040 | 
orchestrator | Monday 16 February 2026 03:35:43 +0000 (0:00:00.134) 0:01:03.182 ******* 2026-02-16 03:35:44.204051 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:44.204062 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:44.204077 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.204088 | orchestrator | 2026-02-16 03:35:44.204099 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-16 03:35:44.204110 | orchestrator | Monday 16 February 2026 03:35:43 +0000 (0:00:00.153) 0:01:03.336 ******* 2026-02-16 03:35:44.204121 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.204132 | orchestrator | 2026-02-16 03:35:44.204143 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-16 03:35:44.204154 | orchestrator | Monday 16 February 2026 03:35:43 +0000 (0:00:00.139) 0:01:03.476 ******* 2026-02-16 03:35:44.204165 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:44.204176 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:44.204187 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.204198 | orchestrator | 2026-02-16 03:35:44.204209 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-16 03:35:44.204220 | orchestrator | Monday 16 February 2026 03:35:43 +0000 (0:00:00.172) 0:01:03.648 ******* 2026-02-16 03:35:44.204231 | orchestrator | 
skipping: [testbed-node-5] 2026-02-16 03:35:44.204241 | orchestrator | 2026-02-16 03:35:44.204253 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-16 03:35:44.204263 | orchestrator | Monday 16 February 2026 03:35:43 +0000 (0:00:00.141) 0:01:03.790 ******* 2026-02-16 03:35:44.204274 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:44.204285 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:44.204297 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:44.204308 | orchestrator | 2026-02-16 03:35:44.204319 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-16 03:35:44.204330 | orchestrator | Monday 16 February 2026 03:35:43 +0000 (0:00:00.154) 0:01:03.945 ******* 2026-02-16 03:35:44.204340 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:35:44.204351 | orchestrator | 2026-02-16 03:35:44.204363 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-16 03:35:44.204373 | orchestrator | Monday 16 February 2026 03:35:44 +0000 (0:00:00.141) 0:01:04.086 ******* 2026-02-16 03:35:44.204391 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:50.536738 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:50.536843 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.536852 | orchestrator | 2026-02-16 03:35:50.536859 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-02-16 03:35:50.536866 | orchestrator | Monday 16 February 2026 03:35:44 +0000 (0:00:00.154) 0:01:04.240 ******* 2026-02-16 03:35:50.536871 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:50.536888 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:50.536893 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.536898 | orchestrator | 2026-02-16 03:35:50.536904 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-16 03:35:50.536909 | orchestrator | Monday 16 February 2026 03:35:44 +0000 (0:00:00.159) 0:01:04.400 ******* 2026-02-16 03:35:50.536914 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:50.536919 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:50.536924 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.536929 | orchestrator | 2026-02-16 03:35:50.536934 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-16 03:35:50.536939 | orchestrator | Monday 16 February 2026 03:35:44 +0000 (0:00:00.352) 0:01:04.753 ******* 2026-02-16 03:35:50.536945 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.536950 | orchestrator | 2026-02-16 03:35:50.536955 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-16 03:35:50.536960 | orchestrator | Monday 16 February 2026 03:35:44 +0000 
(0:00:00.137) 0:01:04.891 ******* 2026-02-16 03:35:50.536965 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.536970 | orchestrator | 2026-02-16 03:35:50.536975 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-16 03:35:50.536980 | orchestrator | Monday 16 February 2026 03:35:44 +0000 (0:00:00.138) 0:01:05.030 ******* 2026-02-16 03:35:50.536986 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.536991 | orchestrator | 2026-02-16 03:35:50.536996 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-16 03:35:50.537002 | orchestrator | Monday 16 February 2026 03:35:45 +0000 (0:00:00.138) 0:01:05.168 ******* 2026-02-16 03:35:50.537007 | orchestrator | ok: [testbed-node-5] => { 2026-02-16 03:35:50.537012 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-16 03:35:50.537018 | orchestrator | } 2026-02-16 03:35:50.537023 | orchestrator | 2026-02-16 03:35:50.537028 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-16 03:35:50.537033 | orchestrator | Monday 16 February 2026 03:35:45 +0000 (0:00:00.143) 0:01:05.312 ******* 2026-02-16 03:35:50.537038 | orchestrator | ok: [testbed-node-5] => { 2026-02-16 03:35:50.537043 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-16 03:35:50.537048 | orchestrator | } 2026-02-16 03:35:50.537053 | orchestrator | 2026-02-16 03:35:50.537059 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-16 03:35:50.537064 | orchestrator | Monday 16 February 2026 03:35:45 +0000 (0:00:00.148) 0:01:05.461 ******* 2026-02-16 03:35:50.537069 | orchestrator | ok: [testbed-node-5] => { 2026-02-16 03:35:50.537074 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-16 03:35:50.537079 | orchestrator | } 2026-02-16 03:35:50.537084 | orchestrator | 2026-02-16 03:35:50.537089 | orchestrator | TASK 
[Gather DB VGs with total and available size in bytes] ******************** 2026-02-16 03:35:50.537094 | orchestrator | Monday 16 February 2026 03:35:45 +0000 (0:00:00.144) 0:01:05.605 ******* 2026-02-16 03:35:50.537099 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:35:50.537108 | orchestrator | 2026-02-16 03:35:50.537113 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-16 03:35:50.537119 | orchestrator | Monday 16 February 2026 03:35:46 +0000 (0:00:00.545) 0:01:06.151 ******* 2026-02-16 03:35:50.537124 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:35:50.537129 | orchestrator | 2026-02-16 03:35:50.537134 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-16 03:35:50.537139 | orchestrator | Monday 16 February 2026 03:35:46 +0000 (0:00:00.539) 0:01:06.691 ******* 2026-02-16 03:35:50.537144 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:35:50.537149 | orchestrator | 2026-02-16 03:35:50.537154 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-16 03:35:50.537159 | orchestrator | Monday 16 February 2026 03:35:47 +0000 (0:00:00.521) 0:01:07.212 ******* 2026-02-16 03:35:50.537164 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:35:50.537169 | orchestrator | 2026-02-16 03:35:50.537174 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-16 03:35:50.537180 | orchestrator | Monday 16 February 2026 03:35:47 +0000 (0:00:00.146) 0:01:07.359 ******* 2026-02-16 03:35:50.537185 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.537190 | orchestrator | 2026-02-16 03:35:50.537195 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-16 03:35:50.537200 | orchestrator | Monday 16 February 2026 03:35:47 +0000 (0:00:00.114) 0:01:07.473 ******* 2026-02-16 03:35:50.537205 | orchestrator | 
skipping: [testbed-node-5] 2026-02-16 03:35:50.537210 | orchestrator | 2026-02-16 03:35:50.537215 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-16 03:35:50.537220 | orchestrator | Monday 16 February 2026 03:35:47 +0000 (0:00:00.321) 0:01:07.795 ******* 2026-02-16 03:35:50.537225 | orchestrator | ok: [testbed-node-5] => { 2026-02-16 03:35:50.537231 | orchestrator |  "vgs_report": { 2026-02-16 03:35:50.537236 | orchestrator |  "vg": [] 2026-02-16 03:35:50.537251 | orchestrator |  } 2026-02-16 03:35:50.537257 | orchestrator | } 2026-02-16 03:35:50.537262 | orchestrator | 2026-02-16 03:35:50.537268 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-16 03:35:50.537273 | orchestrator | Monday 16 February 2026 03:35:47 +0000 (0:00:00.156) 0:01:07.951 ******* 2026-02-16 03:35:50.537278 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.537283 | orchestrator | 2026-02-16 03:35:50.537288 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-16 03:35:50.537293 | orchestrator | Monday 16 February 2026 03:35:48 +0000 (0:00:00.139) 0:01:08.090 ******* 2026-02-16 03:35:50.537298 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.537303 | orchestrator | 2026-02-16 03:35:50.537309 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-16 03:35:50.537313 | orchestrator | Monday 16 February 2026 03:35:48 +0000 (0:00:00.146) 0:01:08.236 ******* 2026-02-16 03:35:50.537318 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.537323 | orchestrator | 2026-02-16 03:35:50.537332 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-16 03:35:50.537337 | orchestrator | Monday 16 February 2026 03:35:48 +0000 (0:00:00.140) 0:01:08.377 ******* 2026-02-16 03:35:50.537342 | orchestrator | 
skipping: [testbed-node-5] 2026-02-16 03:35:50.537347 | orchestrator | 2026-02-16 03:35:50.537353 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-16 03:35:50.537358 | orchestrator | Monday 16 February 2026 03:35:48 +0000 (0:00:00.138) 0:01:08.515 ******* 2026-02-16 03:35:50.537363 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.537368 | orchestrator | 2026-02-16 03:35:50.537373 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-16 03:35:50.537378 | orchestrator | Monday 16 February 2026 03:35:48 +0000 (0:00:00.135) 0:01:08.651 ******* 2026-02-16 03:35:50.537383 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.537388 | orchestrator | 2026-02-16 03:35:50.537393 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-16 03:35:50.537402 | orchestrator | Monday 16 February 2026 03:35:48 +0000 (0:00:00.136) 0:01:08.787 ******* 2026-02-16 03:35:50.537407 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.537412 | orchestrator | 2026-02-16 03:35:50.537417 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-16 03:35:50.537422 | orchestrator | Monday 16 February 2026 03:35:48 +0000 (0:00:00.133) 0:01:08.920 ******* 2026-02-16 03:35:50.537427 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.537432 | orchestrator | 2026-02-16 03:35:50.537437 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-16 03:35:50.537442 | orchestrator | Monday 16 February 2026 03:35:49 +0000 (0:00:00.140) 0:01:09.061 ******* 2026-02-16 03:35:50.537447 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.537452 | orchestrator | 2026-02-16 03:35:50.537457 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-16 
03:35:50.537462 | orchestrator | Monday 16 February 2026 03:35:49 +0000 (0:00:00.136) 0:01:09.197 ******* 2026-02-16 03:35:50.537468 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.537473 | orchestrator | 2026-02-16 03:35:50.537478 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-16 03:35:50.537483 | orchestrator | Monday 16 February 2026 03:35:49 +0000 (0:00:00.142) 0:01:09.339 ******* 2026-02-16 03:35:50.537488 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.537503 | orchestrator | 2026-02-16 03:35:50.537509 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-16 03:35:50.537514 | orchestrator | Monday 16 February 2026 03:35:49 +0000 (0:00:00.329) 0:01:09.669 ******* 2026-02-16 03:35:50.537519 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.537524 | orchestrator | 2026-02-16 03:35:50.537529 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-16 03:35:50.537534 | orchestrator | Monday 16 February 2026 03:35:49 +0000 (0:00:00.132) 0:01:09.802 ******* 2026-02-16 03:35:50.537539 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.537552 | orchestrator | 2026-02-16 03:35:50.537574 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-16 03:35:50.537579 | orchestrator | Monday 16 February 2026 03:35:49 +0000 (0:00:00.148) 0:01:09.950 ******* 2026-02-16 03:35:50.537584 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.537589 | orchestrator | 2026-02-16 03:35:50.537594 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-16 03:35:50.537599 | orchestrator | Monday 16 February 2026 03:35:50 +0000 (0:00:00.135) 0:01:10.086 ******* 2026-02-16 03:35:50.537604 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:50.537609 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:50.537614 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.537619 | orchestrator | 2026-02-16 03:35:50.537624 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-16 03:35:50.537629 | orchestrator | Monday 16 February 2026 03:35:50 +0000 (0:00:00.182) 0:01:10.268 ******* 2026-02-16 03:35:50.537635 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:50.537640 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:50.537645 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:50.537650 | orchestrator | 2026-02-16 03:35:50.537655 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-16 03:35:50.537660 | orchestrator | Monday 16 February 2026 03:35:50 +0000 (0:00:00.146) 0:01:10.415 ******* 2026-02-16 03:35:50.537674 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:53.506411 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:53.506506 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:53.506518 | orchestrator | 2026-02-16 03:35:53.506527 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-02-16 03:35:53.506536 | orchestrator | Monday 16 February 2026 03:35:50 +0000 (0:00:00.161) 0:01:10.576 ******* 2026-02-16 03:35:53.506544 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:53.506609 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:53.506619 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:53.506627 | orchestrator | 2026-02-16 03:35:53.506634 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-16 03:35:53.506642 | orchestrator | Monday 16 February 2026 03:35:50 +0000 (0:00:00.157) 0:01:10.734 ******* 2026-02-16 03:35:53.506649 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:53.506656 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:53.506664 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:53.506672 | orchestrator | 2026-02-16 03:35:53.506679 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-16 03:35:53.506686 | orchestrator | Monday 16 February 2026 03:35:50 +0000 (0:00:00.167) 0:01:10.902 ******* 2026-02-16 03:35:53.506694 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:53.506723 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:53.506730 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:53.506737 | orchestrator | 2026-02-16 03:35:53.506744 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-16 03:35:53.506751 | orchestrator | Monday 16 February 2026 03:35:51 +0000 (0:00:00.152) 0:01:11.054 ******* 2026-02-16 03:35:53.506759 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:53.506766 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:53.506773 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:53.506780 | orchestrator | 2026-02-16 03:35:53.506787 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-16 03:35:53.506794 | orchestrator | Monday 16 February 2026 03:35:51 +0000 (0:00:00.145) 0:01:11.200 ******* 2026-02-16 03:35:53.506802 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:53.506809 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:53.506816 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:53.506823 | orchestrator | 2026-02-16 03:35:53.506830 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-16 03:35:53.506854 | orchestrator | Monday 16 February 2026 03:35:51 +0000 (0:00:00.145) 0:01:11.345 ******* 2026-02-16 03:35:53.506862 | 
orchestrator | ok: [testbed-node-5] 2026-02-16 03:35:53.506870 | orchestrator | 2026-02-16 03:35:53.506877 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-16 03:35:53.506884 | orchestrator | Monday 16 February 2026 03:35:52 +0000 (0:00:00.719) 0:01:12.065 ******* 2026-02-16 03:35:53.506891 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:35:53.506899 | orchestrator | 2026-02-16 03:35:53.506906 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-16 03:35:53.506913 | orchestrator | Monday 16 February 2026 03:35:52 +0000 (0:00:00.519) 0:01:12.584 ******* 2026-02-16 03:35:53.506920 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:35:53.506927 | orchestrator | 2026-02-16 03:35:53.506937 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-16 03:35:53.506945 | orchestrator | Monday 16 February 2026 03:35:52 +0000 (0:00:00.148) 0:01:12.733 ******* 2026-02-16 03:35:53.506953 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'vg_name': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}) 2026-02-16 03:35:53.506962 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'vg_name': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'}) 2026-02-16 03:35:53.506970 | orchestrator | 2026-02-16 03:35:53.506979 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-16 03:35:53.506987 | orchestrator | Monday 16 February 2026 03:35:52 +0000 (0:00:00.172) 0:01:12.906 ******* 2026-02-16 03:35:53.507009 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:53.507018 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:53.507027 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:53.507035 | orchestrator | 2026-02-16 03:35:53.507044 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-16 03:35:53.507052 | orchestrator | Monday 16 February 2026 03:35:53 +0000 (0:00:00.155) 0:01:13.061 ******* 2026-02-16 03:35:53.507065 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:53.507074 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:53.507082 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:53.507090 | orchestrator | 2026-02-16 03:35:53.507098 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-16 03:35:53.507107 | orchestrator | Monday 16 February 2026 03:35:53 +0000 (0:00:00.165) 0:01:13.226 ******* 2026-02-16 03:35:53.507115 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 03:35:53.507124 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 03:35:53.507132 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:35:53.507140 | orchestrator | 2026-02-16 03:35:53.507148 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-16 03:35:53.507157 | orchestrator | Monday 16 February 2026 03:35:53 +0000 (0:00:00.142) 0:01:13.369 ******* 2026-02-16 03:35:53.507165 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-16 03:35:53.507174 | orchestrator |  "lvm_report": { 2026-02-16 03:35:53.507182 | orchestrator |  "lv": [ 2026-02-16 03:35:53.507190 | orchestrator |  { 2026-02-16 03:35:53.507199 | orchestrator |  "lv_name": "osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5", 2026-02-16 03:35:53.507213 | orchestrator |  "vg_name": "ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5" 2026-02-16 03:35:53.507222 | orchestrator |  }, 2026-02-16 03:35:53.507230 | orchestrator |  { 2026-02-16 03:35:53.507239 | orchestrator |  "lv_name": "osd-block-f418f421-cc32-53ce-b421-39353fe37c02", 2026-02-16 03:35:53.507247 | orchestrator |  "vg_name": "ceph-f418f421-cc32-53ce-b421-39353fe37c02" 2026-02-16 03:35:53.507255 | orchestrator |  } 2026-02-16 03:35:53.507264 | orchestrator |  ], 2026-02-16 03:35:53.507272 | orchestrator |  "pv": [ 2026-02-16 03:35:53.507280 | orchestrator |  { 2026-02-16 03:35:53.507289 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-16 03:35:53.507297 | orchestrator |  "vg_name": "ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5" 2026-02-16 03:35:53.507305 | orchestrator |  }, 2026-02-16 03:35:53.507312 | orchestrator |  { 2026-02-16 03:35:53.507319 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-16 03:35:53.507326 | orchestrator |  "vg_name": "ceph-f418f421-cc32-53ce-b421-39353fe37c02" 2026-02-16 03:35:53.507334 | orchestrator |  } 2026-02-16 03:35:53.507341 | orchestrator |  ] 2026-02-16 03:35:53.507348 | orchestrator |  } 2026-02-16 03:35:53.507355 | orchestrator | } 2026-02-16 03:35:53.507363 | orchestrator | 2026-02-16 03:35:53.507370 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 03:35:53.507377 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-16 03:35:53.507385 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-16 03:35:53.507392 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-16 03:35:53.507400 | orchestrator | 2026-02-16 03:35:53.507407 | orchestrator | 2026-02-16 03:35:53.507414 | orchestrator | 2026-02-16 03:35:53.507421 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 03:35:53.507428 | orchestrator | Monday 16 February 2026 03:35:53 +0000 (0:00:00.150) 0:01:13.520 ******* 2026-02-16 03:35:53.507435 | orchestrator | =============================================================================== 2026-02-16 03:35:53.507443 | orchestrator | Create block VGs -------------------------------------------------------- 5.77s 2026-02-16 03:35:53.507450 | orchestrator | Create block LVs -------------------------------------------------------- 4.22s 2026-02-16 03:35:53.507457 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.77s 2026-02-16 03:35:53.507464 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.71s 2026-02-16 03:35:53.507471 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.59s 2026-02-16 03:35:53.507478 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.57s 2026-02-16 03:35:53.507485 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.55s 2026-02-16 03:35:53.507492 | orchestrator | Add known links to the list of available block devices ------------------ 1.34s 2026-02-16 03:35:53.507504 | orchestrator | Add known partitions to the list of available block devices ------------- 1.32s 2026-02-16 03:35:53.860388 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.17s 2026-02-16 03:35:53.860485 | orchestrator | Add known links to the list of available block devices ------------------ 0.87s 2026-02-16 03:35:53.860498 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2026-02-16 03:35:53.860509 | orchestrator | Calculate VG sizes (with buffer) ---------------------------------------- 0.75s 2026-02-16 03:35:53.860519 | orchestrator | Print LVM report data --------------------------------------------------- 0.74s 2026-02-16 03:35:53.860554 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.73s 2026-02-16 03:35:53.860614 | orchestrator | Get initial list of available block devices ----------------------------- 0.71s 2026-02-16 03:35:53.860625 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.68s 2026-02-16 03:35:53.860635 | orchestrator | Print 'Create block LVs' ------------------------------------------------ 0.68s 2026-02-16 03:35:53.860644 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-02-16 03:35:53.860654 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s 2026-02-16 03:36:06.203340 | orchestrator | 2026-02-16 03:36:06 | INFO  | Task 0e04af45-3311-406d-b744-0386170b4289 (facts) was prepared for execution. 2026-02-16 03:36:06.203458 | orchestrator | 2026-02-16 03:36:06 | INFO  | It takes a moment until task 0e04af45-3311-406d-b744-0386170b4289 (facts) has been started and output is visible here. 
2026-02-16 03:36:19.239557 | orchestrator | 2026-02-16 03:36:19.239772 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-16 03:36:19.239794 | orchestrator | 2026-02-16 03:36:19.239807 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-16 03:36:19.239819 | orchestrator | Monday 16 February 2026 03:36:10 +0000 (0:00:00.280) 0:00:00.280 ******* 2026-02-16 03:36:19.239830 | orchestrator | ok: [testbed-manager] 2026-02-16 03:36:19.239842 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:36:19.239853 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:36:19.239865 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:36:19.239875 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:36:19.239886 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:36:19.239897 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:36:19.239908 | orchestrator | 2026-02-16 03:36:19.239919 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-16 03:36:19.239931 | orchestrator | Monday 16 February 2026 03:36:11 +0000 (0:00:01.123) 0:00:01.403 ******* 2026-02-16 03:36:19.239942 | orchestrator | skipping: [testbed-manager] 2026-02-16 03:36:19.239953 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:36:19.239964 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:36:19.239975 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:36:19.239986 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:36:19.239997 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:36:19.240008 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:36:19.240019 | orchestrator | 2026-02-16 03:36:19.240030 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-16 03:36:19.240041 | orchestrator | 2026-02-16 03:36:19.240052 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-16 03:36:19.240065 | orchestrator | Monday 16 February 2026 03:36:12 +0000 (0:00:01.281) 0:00:02.685 ******* 2026-02-16 03:36:19.240078 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:36:19.240090 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:36:19.240102 | orchestrator | ok: [testbed-manager] 2026-02-16 03:36:19.240115 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:36:19.240128 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:36:19.240140 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:36:19.240152 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:36:19.240163 | orchestrator | 2026-02-16 03:36:19.240176 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-16 03:36:19.240188 | orchestrator | 2026-02-16 03:36:19.240201 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-16 03:36:19.240213 | orchestrator | Monday 16 February 2026 03:36:18 +0000 (0:00:05.505) 0:00:08.191 ******* 2026-02-16 03:36:19.240226 | orchestrator | skipping: [testbed-manager] 2026-02-16 03:36:19.240238 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:36:19.240251 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:36:19.240264 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:36:19.240276 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:36:19.240314 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:36:19.240327 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:36:19.240339 | orchestrator | 2026-02-16 03:36:19.240352 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 03:36:19.240365 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:36:19.240378 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-16 03:36:19.240390 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:36:19.240403 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:36:19.240416 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:36:19.240427 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:36:19.240438 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:36:19.240449 | orchestrator | 2026-02-16 03:36:19.240460 | orchestrator | 2026-02-16 03:36:19.240471 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 03:36:19.240483 | orchestrator | Monday 16 February 2026 03:36:18 +0000 (0:00:00.547) 0:00:08.738 ******* 2026-02-16 03:36:19.240493 | orchestrator | =============================================================================== 2026-02-16 03:36:19.240504 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.51s 2026-02-16 03:36:19.240515 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.28s 2026-02-16 03:36:19.240540 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s 2026-02-16 03:36:19.240552 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-02-16 03:36:21.576355 | orchestrator | 2026-02-16 03:36:21 | INFO  | Task 49013325-952c-42da-947f-64a0e2767c79 (ceph) was prepared for execution. 2026-02-16 03:36:21.576458 | orchestrator | 2026-02-16 03:36:21 | INFO  | It takes a moment until task 49013325-952c-42da-947f-64a0e2767c79 (ceph) has been started and output is visible here. 
2026-02-16 03:36:38.560759 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-16 03:36:38.560872 | orchestrator | 2.16.14 2026-02-16 03:36:38.560889 | orchestrator | 2026-02-16 03:36:38.560902 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-02-16 03:36:38.560915 | orchestrator | 2026-02-16 03:36:38.560927 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-16 03:36:38.560938 | orchestrator | Monday 16 February 2026 03:36:26 +0000 (0:00:00.581) 0:00:00.581 ******* 2026-02-16 03:36:38.560950 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:36:38.560962 | orchestrator | 2026-02-16 03:36:38.560973 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-16 03:36:38.560984 | orchestrator | Monday 16 February 2026 03:36:27 +0000 (0:00:00.958) 0:00:01.540 ******* 2026-02-16 03:36:38.560995 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:36:38.561006 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:36:38.561017 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:36:38.561027 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:36:38.561038 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:36:38.561049 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:36:38.561084 | orchestrator | 2026-02-16 03:36:38.561095 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-16 03:36:38.561107 | orchestrator | Monday 16 February 2026 03:36:28 +0000 (0:00:01.226) 0:00:02.766 ******* 2026-02-16 03:36:38.561118 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:36:38.561129 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:36:38.561140 | orchestrator | ok: [testbed-node-5] 2026-02-16 
03:36:38.561151 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:36:38.561162 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:36:38.561172 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:36:38.561183 | orchestrator |
2026-02-16 03:36:38.561193 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-16 03:36:38.561204 | orchestrator | Monday 16 February 2026 03:36:29 +0000 (0:00:00.634) 0:00:03.401 *******
2026-02-16 03:36:38.561217 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:36:38.561229 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:36:38.561241 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:36:38.561254 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:36:38.561266 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:36:38.561278 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:36:38.561291 | orchestrator |
2026-02-16 03:36:38.561303 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-16 03:36:38.561315 | orchestrator | Monday 16 February 2026 03:36:29 +0000 (0:00:00.839) 0:00:04.241 *******
2026-02-16 03:36:38.561328 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:36:38.561340 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:36:38.561352 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:36:38.561364 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:36:38.561376 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:36:38.561389 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:36:38.561401 | orchestrator |
2026-02-16 03:36:38.561414 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-16 03:36:38.561426 | orchestrator | Monday 16 February 2026 03:36:30 +0000 (0:00:00.714) 0:00:04.955 *******
2026-02-16 03:36:38.561439 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:36:38.561450 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:36:38.561462 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:36:38.561475 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:36:38.561487 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:36:38.561498 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:36:38.561511 | orchestrator |
2026-02-16 03:36:38.561523 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-16 03:36:38.561535 | orchestrator | Monday 16 February 2026 03:36:31 +0000 (0:00:00.610) 0:00:05.566 *******
2026-02-16 03:36:38.561547 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:36:38.561560 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:36:38.561572 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:36:38.561583 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:36:38.561594 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:36:38.561604 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:36:38.561635 | orchestrator |
2026-02-16 03:36:38.561646 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-16 03:36:38.561657 | orchestrator | Monday 16 February 2026 03:36:32 +0000 (0:00:00.790) 0:00:06.357 *******
2026-02-16 03:36:38.561668 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:36:38.561680 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:36:38.561690 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:36:38.561701 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:36:38.561716 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:36:38.561734 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:36:38.561751 | orchestrator |
2026-02-16 03:36:38.561771 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-16 03:36:38.561790 | orchestrator | Monday 16 February 2026 03:36:32 +0000 (0:00:00.633) 0:00:06.990 *******
2026-02-16 03:36:38.561807 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:36:38.561826 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:36:38.561847 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:36:38.561858 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:36:38.561868 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:36:38.561879 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:36:38.561890 | orchestrator |
2026-02-16 03:36:38.561901 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-16 03:36:38.561912 | orchestrator | Monday 16 February 2026 03:36:33 +0000 (0:00:00.777) 0:00:07.768 *******
2026-02-16 03:36:38.561937 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-16 03:36:38.561948 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-16 03:36:38.561959 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-16 03:36:38.561970 | orchestrator |
2026-02-16 03:36:38.561981 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-16 03:36:38.561992 | orchestrator | Monday 16 February 2026 03:36:34 +0000 (0:00:00.649) 0:00:08.417 *******
2026-02-16 03:36:38.562002 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:36:38.562080 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:36:38.562093 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:36:38.562125 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:36:38.562136 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:36:38.562147 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:36:38.562158 | orchestrator |
2026-02-16 03:36:38.562169 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-16 03:36:38.562180 | orchestrator | Monday 16 February 2026 03:36:34 +0000 (0:00:02.299) 0:00:09.206 *******
2026-02-16 03:36:38.562191 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] =>
(item=testbed-node-0)
2026-02-16 03:36:38.562202 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-16 03:36:38.562213 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-16 03:36:38.562232 | orchestrator |
2026-02-16 03:36:38.562244 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-16 03:36:38.562255 | orchestrator | Monday 16 February 2026 03:36:37 +0000 (0:00:02.299) 0:00:11.505 *******
2026-02-16 03:36:38.562266 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-16 03:36:38.562277 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-16 03:36:38.562288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-16 03:36:38.562299 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:36:38.562310 | orchestrator |
2026-02-16 03:36:38.562321 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-16 03:36:38.562332 | orchestrator | Monday 16 February 2026 03:36:37 +0000 (0:00:00.426) 0:00:11.931 *******
2026-02-16 03:36:38.562345 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-16 03:36:38.562359 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-16 03:36:38.562370 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-16 03:36:38.562381 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:36:38.562393 | orchestrator |
2026-02-16 03:36:38.562403 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-16 03:36:38.562414 | orchestrator | Monday 16 February 2026 03:36:38 +0000 (0:00:00.598) 0:00:12.530 *******
2026-02-16 03:36:38.562428 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-16 03:36:38.562453 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-16 03:36:38.562465 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-16 03:36:38.562476 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:36:38.562487 | orchestrator |
2026-02-16 03:36:38.562498 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container]
*************************** 2026-02-16 03:36:38.562509 | orchestrator | Monday 16 February 2026 03:36:38 +0000 (0:00:00.163) 0:00:12.694 ******* 2026-02-16 03:36:38.562537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-16 03:36:35.722739', 'end': '2026-02-16 03:36:35.769427', 'delta': '0:00:00.046688', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-16 03:36:47.899580 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-16 03:36:36.290878', 'end': '2026-02-16 03:36:36.327740', 'delta': '0:00:00.036862', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-16 03:36:47.899746 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-16 03:36:36.818977', 'end': '2026-02-16 03:36:36.864778', 'delta': 
'0:00:00.045801', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-16 03:36:47.899764 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:36:47.899779 | orchestrator |
2026-02-16 03:36:47.899792 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-16 03:36:47.899831 | orchestrator | Monday 16 February 2026 03:36:38 +0000 (0:00:00.173) 0:00:12.867 *******
2026-02-16 03:36:47.899843 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:36:47.899855 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:36:47.899865 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:36:47.899876 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:36:47.899887 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:36:47.899898 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:36:47.899908 | orchestrator |
2026-02-16 03:36:47.899919 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-16 03:36:47.899930 | orchestrator | Monday 16 February 2026 03:36:39 +0000 (0:00:00.715) 0:00:13.582 *******
2026-02-16 03:36:47.899942 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-16 03:36:47.899952 | orchestrator |
2026-02-16 03:36:47.899963 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-16 03:36:47.899974 | orchestrator | Monday 16 February 2026 03:36:40 +0000 (0:00:00.838) 0:00:14.421 *******
2026-02-16 03:36:47.899985 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:36:47.899996 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:36:47.900006 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:36:47.900017 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:36:47.900028 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:36:47.900038 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:36:47.900049 | orchestrator |
2026-02-16 03:36:47.900059 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-16 03:36:47.900071 | orchestrator | Monday 16 February 2026 03:36:40 +0000 (0:00:00.779) 0:00:15.200 *******
2026-02-16 03:36:47.900081 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:36:47.900092 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:36:47.900104 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:36:47.900118 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:36:47.900130 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:36:47.900143 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:36:47.900156 | orchestrator |
2026-02-16 03:36:47.900175 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-16 03:36:47.900193 | orchestrator | Monday 16 February 2026 03:36:41 +0000 (0:00:01.079) 0:00:16.280 *******
2026-02-16 03:36:47.900211 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:36:47.900228 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:36:47.900241 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:36:47.900253 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:36:47.900266 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:36:47.900279 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:36:47.900291 | orchestrator |
2026-02-16 03:36:47.900303 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-16 03:36:47.900316 | orchestrator | Monday 16 February 2026 03:36:42 +0000 (0:00:00.591) 0:00:16.872 *******
2026-02-16 03:36:47.900344 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:36:47.900368 | orchestrator |
2026-02-16 03:36:47.900396 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-16 03:36:47.900409 | orchestrator | Monday 16 February 2026 03:36:42 +0000 (0:00:00.129) 0:00:17.001 *******
2026-02-16 03:36:47.900421 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:36:47.900433 | orchestrator |
2026-02-16 03:36:47.900445 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-16 03:36:47.900457 | orchestrator | Monday 16 February 2026 03:36:42 +0000 (0:00:00.211) 0:00:17.212 *******
2026-02-16 03:36:47.900468 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:36:47.900479 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:36:47.900489 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:36:47.900500 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:36:47.900510 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:36:47.900521 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:36:47.900540 | orchestrator |
2026-02-16 03:36:47.900570 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-16 03:36:47.900583 | orchestrator | Monday 16 February 2026 03:36:43 +0000 (0:00:00.738) 0:00:17.951 *******
2026-02-16 03:36:47.900594 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:36:47.900604 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:36:47.900615 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:36:47.900656 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:36:47.900676 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:36:47.900694 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:36:47.900711 | orchestrator |
2026-02-16 03:36:47.900729 | orchestrator | TASK [ceph-facts :
Set_fact build devices from resolved symlinks] **************
2026-02-16 03:36:47.900740 | orchestrator | Monday 16 February 2026 03:36:44 +0000 (0:00:00.585) 0:00:18.537 *******
2026-02-16 03:36:47.900751 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:36:47.900762 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:36:47.900772 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:36:47.900783 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:36:47.900793 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:36:47.900804 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:36:47.900814 | orchestrator |
2026-02-16 03:36:47.900825 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-16 03:36:47.900836 | orchestrator | Monday 16 February 2026 03:36:44 +0000 (0:00:00.778) 0:00:19.315 *******
2026-02-16 03:36:47.900847 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:36:47.900857 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:36:47.900868 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:36:47.900878 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:36:47.900889 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:36:47.900899 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:36:47.900910 | orchestrator |
2026-02-16 03:36:47.900920 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-16 03:36:47.900931 | orchestrator | Monday 16 February 2026 03:36:45 +0000 (0:00:00.619) 0:00:19.935 *******
2026-02-16 03:36:47.900941 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:36:47.900952 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:36:47.900963 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:36:47.900973 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:36:47.900984 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:36:47.900994 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:36:47.901005 | orchestrator |
2026-02-16 03:36:47.901015 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-16 03:36:47.901026 | orchestrator | Monday 16 February 2026 03:36:46 +0000 (0:00:00.745) 0:00:20.680 *******
2026-02-16 03:36:47.901036 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:36:47.901047 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:36:47.901057 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:36:47.901067 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:36:47.901078 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:36:47.901088 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:36:47.901099 | orchestrator |
2026-02-16 03:36:47.901110 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-16 03:36:47.901122 | orchestrator | Monday 16 February 2026 03:36:46 +0000 (0:00:00.568) 0:00:21.248 *******
2026-02-16 03:36:47.901132 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:36:47.901143 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:36:47.901153 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:36:47.901164 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:36:47.901174 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:36:47.901185 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:36:47.901195 | orchestrator |
2026-02-16 03:36:47.901206 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-16 03:36:47.901216 | orchestrator | Monday 16 February 2026 03:36:47 +0000 (0:00:00.828) 0:00:22.077 *******
2026-02-16 03:36:47.901237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids':
['dm-name-ceph--2f9a42f5--b575--5e11--9555--a5550e2fae1e-osd--block--2f9a42f5--b575--5e11--9555--a5550e2fae1e', 'dm-uuid-LVM-F4bqzAKmgcv4nzZjVJIDDLRdBkjdiY7Ac3eDMWCQjEFL46zd8qXZ7hWvk7L0nQAD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:47.901257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50d7a967--e09e--512a--aa83--aa9bbdf9ab74-osd--block--50d7a967--e09e--512a--aa83--aa9bbdf9ab74', 'dm-uuid-LVM-2dhVtclKCjfsjMcDe2D03F1qrxXtffQzYuMeigkCrxOY0hLAH1gOwaoo3bAqwsvb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:47.901277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.017844 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.017940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.017952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.017963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.017973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.018004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.018066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.018117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part1', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part14', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part15', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part16', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.018136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2f9a42f5--b575--5e11--9555--a5550e2fae1e-osd--block--2f9a42f5--b575--5e11--9555--a5550e2fae1e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1ITxS0-SFz0-FdlF-VzSF-Uv8m-y10A-m0caaJ', 'scsi-0QEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51', 'scsi-SQEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.018149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--50d7a967--e09e--512a--aa83--aa9bbdf9ab74-osd--block--50d7a967--e09e--512a--aa83--aa9bbdf9ab74'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UNvti2-beMu-mtun-nkoB-anD7-j3vD-BO56Wb', 'scsi-0QEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e', 'scsi-SQEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.018168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2', 'scsi-SQEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.018184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d-osd--block--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d', 'dm-uuid-LVM-sWHkNGoua6AD2gtW0aHfBT1ggS3B4VVdqYYWm2N1bkS9UT0Dip02AjKcu40awaVv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.018204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-16-02-25-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.189594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ec6a818--dc71--5cb4--ac47--83f209d09bca-osd--block--3ec6a818--dc71--5cb4--ac47--83f209d09bca', 
'dm-uuid-LVM-IKNT1aRSRRXmVnhjGHBWtObOyhGZoCrKxknn5549qE5Iv1X6exAA2Hq2RDcxdb2r'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.189744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.189763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.189775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.189811 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.189823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.189834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.189846 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:36:48.189874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.189886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.189921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part1', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part14', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part15', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part16', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.189946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d-osd--block--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-W4T77R-WX0u-2wiK-0VwS-pHXw-eigq-78SyVp', 'scsi-0QEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829', 'scsi-SQEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.189964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3ec6a818--dc71--5cb4--ac47--83f209d09bca-osd--block--3ec6a818--dc71--5cb4--ac47--83f209d09bca'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ezeU5X-kiVi-Bwdm-EJU8-vTMX-Ty8v-7odRXz', 'scsi-0QEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e', 'scsi-SQEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.189986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705', 'scsi-SQEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.389534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-16-02-25-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.389690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--10a0662d--59e9--5a43--af5c--1b6d671b7fa5-osd--block--10a0662d--59e9--5a43--af5c--1b6d671b7fa5', 'dm-uuid-LVM-SWv31bXFKxTO3vyaMihj1WLbgzWvzkgjdSLmrZCRVKIRBOjrNick0KroaJNYuYcA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.389733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f418f421--cc32--53ce--b421--39353fe37c02-osd--block--f418f421--cc32--53ce--b421--39353fe37c02', 'dm-uuid-LVM-fuzYkTDOD1mzGPTtEVy3HIfkbUT8vrouEUngu6j9gDpOiJ09icmXLIesmhVGIdAG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.389748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.389763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.389775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.389801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.389813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.389844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.389856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.389868 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:36:48.389881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.389909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part1', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part14', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part15', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part16', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.389926 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--10a0662d--59e9--5a43--af5c--1b6d671b7fa5-osd--block--10a0662d--59e9--5a43--af5c--1b6d671b7fa5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-z25UVR-mt7s-2TOu-f4Na-2m38-OcPQ-rSbkPq', 'scsi-0QEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5', 'scsi-SQEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.389947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f418f421--cc32--53ce--b421--39353fe37c02-osd--block--f418f421--cc32--53ce--b421--39353fe37c02'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Qrttlw-98AS-fQrI-yUr1-wyrI-2oj6-dafTom', 'scsi-0QEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569', 'scsi-SQEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.549602 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d', 'scsi-SQEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.549785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-16-02-25-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.549804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.549819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-16 03:36:48.549848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.549861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.549872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.549884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.549915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.549947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.549968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.549984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-16-02-25-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.549996 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:36:48.550010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.550087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.550126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.766791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.766895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.766910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.766922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.766949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.766984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.767030 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-16-02-25-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:36:48.767045 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:36:48.767058 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:36:48.767069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.767081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.767092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.767104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.767115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.767134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.767233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.767264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:36:48.968050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part16', 
'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-16 03:36:48.968148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-16-02-25-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-16 03:36:48.968181 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:36:48.968194 | orchestrator |
2026-02-16 03:36:48.968205 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-16 03:36:48.968215 | orchestrator | Monday 16 February 2026 03:36:48 +0000 (0:00:00.996) 0:00:23.074 *******
2026-02-16 03:36:48.968227 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f9a42f5--b575--5e11--9555--a5550e2fae1e-osd--block--2f9a42f5--b575--5e11--9555--a5550e2fae1e', 'dm-uuid-LVM-F4bqzAKmgcv4nzZjVJIDDLRdBkjdiY7Ac3eDMWCQjEFL46zd8qXZ7hWvk7L0nQAD'], 'labels':
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:48.968254 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50d7a967--e09e--512a--aa83--aa9bbdf9ab74-osd--block--50d7a967--e09e--512a--aa83--aa9bbdf9ab74', 'dm-uuid-LVM-2dhVtclKCjfsjMcDe2D03F1qrxXtffQzYuMeigkCrxOY0hLAH1gOwaoo3bAqwsvb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:48.968264 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:48.968275 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:48.968290 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:48.968299 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:48.968324 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:48.968342 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:48.968368 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.269453 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d-osd--block--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d', 'dm-uuid-LVM-sWHkNGoua6AD2gtW0aHfBT1ggS3B4VVdqYYWm2N1bkS9UT0Dip02AjKcu40awaVv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.269556 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.269589 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ec6a818--dc71--5cb4--ac47--83f209d09bca-osd--block--3ec6a818--dc71--5cb4--ac47--83f209d09bca', 'dm-uuid-LVM-IKNT1aRSRRXmVnhjGHBWtObOyhGZoCrKxknn5549qE5Iv1X6exAA2Hq2RDcxdb2r'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-02-16 03:36:49.269710 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part1', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part14', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part15', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part16', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.269729 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.269747 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2f9a42f5--b575--5e11--9555--a5550e2fae1e-osd--block--2f9a42f5--b575--5e11--9555--a5550e2fae1e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1ITxS0-SFz0-FdlF-VzSF-Uv8m-y10A-m0caaJ', 'scsi-0QEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51', 'scsi-SQEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.269768 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.269797 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--50d7a967--e09e--512a--aa83--aa9bbdf9ab74-osd--block--50d7a967--e09e--512a--aa83--aa9bbdf9ab74'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UNvti2-beMu-mtun-nkoB-anD7-j3vD-BO56Wb', 'scsi-0QEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e', 'scsi-SQEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.269827 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2', 'scsi-SQEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.279151 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.279209 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-16-02-25-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.279237 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.279251 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.279262 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.279273 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.279297 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.279317 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part1', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part14', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part15', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part16', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.279339 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d-osd--block--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-W4T77R-WX0u-2wiK-0VwS-pHXw-eigq-78SyVp', 'scsi-0QEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829', 'scsi-SQEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.279358 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3ec6a818--dc71--5cb4--ac47--83f209d09bca-osd--block--3ec6a818--dc71--5cb4--ac47--83f209d09bca'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ezeU5X-kiVi-Bwdm-EJU8-vTMX-Ty8v-7odRXz', 'scsi-0QEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e', 'scsi-SQEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.494521 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705', 'scsi-SQEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.494708 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-16-02-25-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.494740 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:36:49.494797 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--10a0662d--59e9--5a43--af5c--1b6d671b7fa5-osd--block--10a0662d--59e9--5a43--af5c--1b6d671b7fa5', 'dm-uuid-LVM-SWv31bXFKxTO3vyaMihj1WLbgzWvzkgjdSLmrZCRVKIRBOjrNick0KroaJNYuYcA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.494820 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f418f421--cc32--53ce--b421--39353fe37c02-osd--block--f418f421--cc32--53ce--b421--39353fe37c02', 'dm-uuid-LVM-fuzYkTDOD1mzGPTtEVy3HIfkbUT8vrouEUngu6j9gDpOiJ09icmXLIesmhVGIdAG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:36:49.494840 | orchestrator | skipping: 
2026-02-16 03:36:49.494885 | orchestrator | skipping: [testbed-node-5] => (item=loop0, skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool')
2026-02-16 03:36:49.494927 .. 03:36:49.610494 | orchestrator | skipping: [testbed-node-5] => (items loop1..loop7, sda, sdb, sdc, sdd, sr0; same false_condition; per-device fact dumps elided)
2026-02-16 03:36:49.495011 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:36:49.610178 .. 03:36:49.764369 | orchestrator | skipping: [testbed-node-0] => (items loop0..loop7, sda, sr0; skip_reason: 'Conditional result was False', false_condition: 'inventory_hostname in groups.get(osd_group_name, [])'; per-device fact dumps elided)
2026-02-16 03:36:49.764521 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:36:49.764530 .. 03:36:49.988439 | orchestrator | skipping: [testbed-node-1] => (items loop0..loop7, sda, sr0; same false_condition; per-device fact dumps elided)
2026-02-16 03:36:49.988535 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:36:49.988550 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:36:49.988561 .. 03:36:49.988779 | orchestrator | skipping: [testbed-node-2] => (items loop0..loop7, sda; same false_condition; per-device fact dumps elided)
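Every item in the block above is skipped by one of the two conditionals quoted in the log records: `osd_auto_discovery | default(False) | bool` (testbed-node-3/4/5) and `inventory_hostname in groups.get(osd_group_name, [])` (testbed-node-0/1/2). A minimal Python sketch of how these two tests evaluate, assuming hypothetical helper names, an assumed `ceph-osd` group name for `osd_group_name`, and simplifying Jinja2's `|bool` filter to Python truthiness:

```python
# Sketch only, not ceph-ansible code: mirrors the two skip conditions
# recorded in this log. Helper names are invented for illustration.

def should_run_auto_discovery(hostvars: dict) -> bool:
    # 'osd_auto_discovery | default(False) | bool'
    # Simplification: Jinja2's |bool also coerces strings like "yes"/"no".
    return bool(hostvars.get("osd_auto_discovery", False))

def is_osd_host(inventory_hostname: str, groups: dict,
                osd_group_name: str = "ceph-osd") -> bool:
    # 'inventory_hostname in groups.get(osd_group_name, [])'
    return inventory_hostname in groups.get(osd_group_name, [])

# Neither condition holds for the hosts above, so every loop item is skipped.
print(should_run_auto_discovery({}))                          # False
print(is_osd_host("testbed-node-0", {"ceph-osd": []}))        # False
print(is_osd_host("testbed-node-3", {"ceph-osd": ["testbed-node-3"]}))  # True
```

When both tests are false for a host, Ansible emits one "skipping" record per loop item, which is why each host produces a dozen near-identical entries.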
2026-02-16 03:36:49.988812 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-16-02-25-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:37:00.869351 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:37:00.869468 | orchestrator | 2026-02-16 03:37:00.869490 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-16 03:37:00.869506 | orchestrator | Monday 16 February 2026 03:36:49 +0000 (0:00:01.218) 0:00:24.292 ******* 2026-02-16 03:37:00.869521 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:37:00.869535 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:37:00.869549 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:37:00.869563 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:37:00.869575 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:37:00.869589 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:37:00.869602 | orchestrator | 2026-02-16 03:37:00.869616 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-16 03:37:00.869630 | orchestrator | Monday 16 February 2026 03:36:50 +0000 (0:00:00.891) 0:00:25.184 ******* 2026-02-16 03:37:00.869692 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:37:00.869705 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:37:00.869718 | 
orchestrator | ok: [testbed-node-5] 2026-02-16 03:37:00.869731 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:37:00.869744 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:37:00.869757 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:37:00.869771 | orchestrator | 2026-02-16 03:37:00.869780 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-16 03:37:00.869788 | orchestrator | Monday 16 February 2026 03:36:51 +0000 (0:00:00.799) 0:00:25.984 ******* 2026-02-16 03:37:00.869797 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:00.869805 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:37:00.869813 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:37:00.869821 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:37:00.869829 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:37:00.869837 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:37:00.869868 | orchestrator | 2026-02-16 03:37:00.869877 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-16 03:37:00.869886 | orchestrator | Monday 16 February 2026 03:36:52 +0000 (0:00:00.581) 0:00:26.566 ******* 2026-02-16 03:37:00.869895 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:00.869904 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:37:00.869913 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:37:00.869922 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:37:00.869930 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:37:00.869939 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:37:00.869947 | orchestrator | 2026-02-16 03:37:00.869956 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-16 03:37:00.869965 | orchestrator | Monday 16 February 2026 03:36:53 +0000 (0:00:00.793) 0:00:27.359 ******* 2026-02-16 03:37:00.869973 | orchestrator | skipping: 
[testbed-node-3] 2026-02-16 03:37:00.869982 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:37:00.869990 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:37:00.869999 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:37:00.870007 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:37:00.870065 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:37:00.870075 | orchestrator | 2026-02-16 03:37:00.870083 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-16 03:37:00.870092 | orchestrator | Monday 16 February 2026 03:36:53 +0000 (0:00:00.610) 0:00:27.970 ******* 2026-02-16 03:37:00.870101 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:00.870110 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:37:00.870118 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:37:00.870127 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:37:00.870136 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:37:00.870144 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:37:00.870153 | orchestrator | 2026-02-16 03:37:00.870161 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-16 03:37:00.870170 | orchestrator | Monday 16 February 2026 03:36:54 +0000 (0:00:00.795) 0:00:28.766 ******* 2026-02-16 03:37:00.870179 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-16 03:37:00.870188 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-16 03:37:00.870197 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-16 03:37:00.870205 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-16 03:37:00.870214 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-16 03:37:00.870222 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-16 03:37:00.870231 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 
2026-02-16 03:37:00.870239 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-16 03:37:00.870248 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-16 03:37:00.870256 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-16 03:37:00.870265 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-16 03:37:00.870273 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-16 03:37:00.870282 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-16 03:37:00.870290 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-16 03:37:00.870298 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-16 03:37:00.870307 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-16 03:37:00.870315 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-16 03:37:00.870324 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-16 03:37:00.870332 | orchestrator | 2026-02-16 03:37:00.870341 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-16 03:37:00.870350 | orchestrator | Monday 16 February 2026 03:36:55 +0000 (0:00:01.466) 0:00:30.232 ******* 2026-02-16 03:37:00.870358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-16 03:37:00.870387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-16 03:37:00.870397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-16 03:37:00.870405 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:00.870414 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-16 03:37:00.870422 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-16 03:37:00.870431 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-16 03:37:00.870458 | orchestrator | skipping: [testbed-node-4] 
2026-02-16 03:37:00.870468 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-16 03:37:00.870476 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-16 03:37:00.870485 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-16 03:37:00.870493 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:37:00.870502 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-16 03:37:00.870510 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-16 03:37:00.870519 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-16 03:37:00.870527 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:37:00.870536 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-16 03:37:00.870545 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-16 03:37:00.870553 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-16 03:37:00.870562 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:37:00.870570 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-16 03:37:00.870579 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-16 03:37:00.870588 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-16 03:37:00.870596 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:37:00.870605 | orchestrator | 2026-02-16 03:37:00.870614 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-16 03:37:00.870622 | orchestrator | Monday 16 February 2026 03:36:56 +0000 (0:00:00.879) 0:00:31.112 ******* 2026-02-16 03:37:00.870631 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:37:00.870669 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:37:00.870680 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:37:00.870690 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:37:00.870699 | orchestrator | 2026-02-16 03:37:00.870708 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-16 03:37:00.870718 | orchestrator | Monday 16 February 2026 03:36:57 +0000 (0:00:00.966) 0:00:32.079 ******* 2026-02-16 03:37:00.870727 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:00.870736 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:37:00.870745 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:37:00.870753 | orchestrator | 2026-02-16 03:37:00.870762 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-16 03:37:00.870771 | orchestrator | Monday 16 February 2026 03:36:58 +0000 (0:00:00.338) 0:00:32.417 ******* 2026-02-16 03:37:00.870780 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:00.870788 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:37:00.870797 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:37:00.870805 | orchestrator | 2026-02-16 03:37:00.870814 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-16 03:37:00.870823 | orchestrator | Monday 16 February 2026 03:36:58 +0000 (0:00:00.328) 0:00:32.745 ******* 2026-02-16 03:37:00.870832 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:00.870840 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:37:00.870849 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:37:00.870897 | orchestrator | 2026-02-16 03:37:00.870908 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-16 03:37:00.870924 | orchestrator | Monday 16 February 2026 03:36:58 +0000 (0:00:00.509) 0:00:33.255 ******* 2026-02-16 03:37:00.870933 | orchestrator | 
ok: [testbed-node-3] 2026-02-16 03:37:00.870942 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:37:00.870950 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:37:00.870959 | orchestrator | 2026-02-16 03:37:00.870971 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-16 03:37:00.870986 | orchestrator | Monday 16 February 2026 03:36:59 +0000 (0:00:00.452) 0:00:33.708 ******* 2026-02-16 03:37:00.871009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-16 03:37:00.871025 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-16 03:37:00.871040 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-16 03:37:00.871054 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:00.871068 | orchestrator | 2026-02-16 03:37:00.871082 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-16 03:37:00.871099 | orchestrator | Monday 16 February 2026 03:36:59 +0000 (0:00:00.369) 0:00:34.077 ******* 2026-02-16 03:37:00.871114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-16 03:37:00.871129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-16 03:37:00.871146 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-16 03:37:00.871161 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:00.871176 | orchestrator | 2026-02-16 03:37:00.871189 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-16 03:37:00.871198 | orchestrator | Monday 16 February 2026 03:37:00 +0000 (0:00:00.382) 0:00:34.460 ******* 2026-02-16 03:37:00.871207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-16 03:37:00.871215 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-16 03:37:00.871224 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-02-16 03:37:00.871233 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:00.871241 | orchestrator | 2026-02-16 03:37:00.871256 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-16 03:37:00.871266 | orchestrator | Monday 16 February 2026 03:37:00 +0000 (0:00:00.374) 0:00:34.834 ******* 2026-02-16 03:37:00.871274 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:37:00.871283 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:37:00.871291 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:37:00.871300 | orchestrator | 2026-02-16 03:37:00.871309 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-16 03:37:00.871327 | orchestrator | Monday 16 February 2026 03:37:00 +0000 (0:00:00.342) 0:00:35.176 ******* 2026-02-16 03:37:19.621432 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-16 03:37:19.621573 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-16 03:37:19.621592 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-16 03:37:19.621604 | orchestrator | 2026-02-16 03:37:19.621617 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-16 03:37:19.621629 | orchestrator | Monday 16 February 2026 03:37:01 +0000 (0:00:00.939) 0:00:36.116 ******* 2026-02-16 03:37:19.621641 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-16 03:37:19.621714 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-16 03:37:19.621729 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-16 03:37:19.621740 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-16 03:37:19.621752 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-16 03:37:19.621763 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-16 03:37:19.621774 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-16 03:37:19.621784 | orchestrator | 2026-02-16 03:37:19.621821 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-16 03:37:19.621833 | orchestrator | Monday 16 February 2026 03:37:02 +0000 (0:00:00.794) 0:00:36.910 ******* 2026-02-16 03:37:19.621843 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-16 03:37:19.621854 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-16 03:37:19.621865 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-16 03:37:19.621876 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-16 03:37:19.621887 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-16 03:37:19.621897 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-16 03:37:19.621908 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-16 03:37:19.621919 | orchestrator | 2026-02-16 03:37:19.621929 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-16 03:37:19.621940 | orchestrator | Monday 16 February 2026 03:37:04 +0000 (0:00:01.895) 0:00:38.806 ******* 2026-02-16 03:37:19.621952 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:37:19.621964 | orchestrator | 2026-02-16 03:37:19.621975 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-02-16 03:37:19.621986 | orchestrator | Monday 16 February 2026 03:37:05 +0000 (0:00:01.207) 0:00:40.013 ******* 2026-02-16 03:37:19.621997 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:37:19.622008 | orchestrator | 2026-02-16 03:37:19.622074 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-16 03:37:19.622087 | orchestrator | Monday 16 February 2026 03:37:06 +0000 (0:00:01.174) 0:00:41.188 ******* 2026-02-16 03:37:19.622098 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:19.622109 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:37:19.622119 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:37:19.622131 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:37:19.622142 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:37:19.622152 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:37:19.622163 | orchestrator | 2026-02-16 03:37:19.622174 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-16 03:37:19.622185 | orchestrator | Monday 16 February 2026 03:37:08 +0000 (0:00:01.247) 0:00:42.435 ******* 2026-02-16 03:37:19.622195 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:37:19.622206 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:37:19.622217 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:37:19.622228 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:37:19.622238 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:37:19.622249 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:37:19.622260 | orchestrator | 2026-02-16 03:37:19.622271 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-16 03:37:19.622282 | orchestrator | Monday 16 February 2026 03:37:08 +0000 
(0:00:00.719) 0:00:43.155 ******* 2026-02-16 03:37:19.622294 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:37:19.622313 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:37:19.622332 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:37:19.622350 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:37:19.622368 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:37:19.622386 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:37:19.622406 | orchestrator | 2026-02-16 03:37:19.622427 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-16 03:37:19.622447 | orchestrator | Monday 16 February 2026 03:37:09 +0000 (0:00:00.851) 0:00:44.006 ******* 2026-02-16 03:37:19.622478 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:37:19.622490 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:37:19.622501 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:37:19.622526 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:37:19.622537 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:37:19.622548 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:37:19.622559 | orchestrator | 2026-02-16 03:37:19.622570 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-16 03:37:19.622581 | orchestrator | Monday 16 February 2026 03:37:10 +0000 (0:00:00.720) 0:00:44.727 ******* 2026-02-16 03:37:19.622591 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:19.622602 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:37:19.622632 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:37:19.622643 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:37:19.622683 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:37:19.622696 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:37:19.622707 | orchestrator | 2026-02-16 03:37:19.622718 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-02-16 03:37:19.622729 | orchestrator | Monday 16 February 2026 03:37:11 +0000 (0:00:01.225) 0:00:45.952 ******* 2026-02-16 03:37:19.622740 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:19.622751 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:37:19.622762 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:37:19.622773 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:37:19.622783 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:37:19.622794 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:37:19.622805 | orchestrator | 2026-02-16 03:37:19.622816 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-16 03:37:19.622827 | orchestrator | Monday 16 February 2026 03:37:12 +0000 (0:00:00.585) 0:00:46.538 ******* 2026-02-16 03:37:19.622838 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:19.622848 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:37:19.622859 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:37:19.622870 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:37:19.622881 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:37:19.622892 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:37:19.622902 | orchestrator | 2026-02-16 03:37:19.622913 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-16 03:37:19.622924 | orchestrator | Monday 16 February 2026 03:37:12 +0000 (0:00:00.751) 0:00:47.289 ******* 2026-02-16 03:37:19.622935 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:37:19.622946 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:37:19.622957 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:37:19.622968 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:37:19.622978 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:37:19.622989 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:37:19.623000 | orchestrator | 2026-02-16 
03:37:19.623011 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-16 03:37:19.623022 | orchestrator | Monday 16 February 2026 03:37:13 +0000 (0:00:00.990) 0:00:48.279 ******* 2026-02-16 03:37:19.623032 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:37:19.623043 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:37:19.623054 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:37:19.623064 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:37:19.623075 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:37:19.623086 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:37:19.623096 | orchestrator | 2026-02-16 03:37:19.623107 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-16 03:37:19.623118 | orchestrator | Monday 16 February 2026 03:37:15 +0000 (0:00:01.256) 0:00:49.536 ******* 2026-02-16 03:37:19.623129 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:19.623140 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:37:19.623151 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:37:19.623161 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:37:19.623179 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:37:19.623190 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:37:19.623201 | orchestrator | 2026-02-16 03:37:19.623212 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-16 03:37:19.623224 | orchestrator | Monday 16 February 2026 03:37:15 +0000 (0:00:00.581) 0:00:50.118 ******* 2026-02-16 03:37:19.623234 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:19.623245 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:37:19.623256 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:37:19.623266 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:37:19.623277 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:37:19.623289 | 
orchestrator | ok: [testbed-node-2] 2026-02-16 03:37:19.623299 | orchestrator | 2026-02-16 03:37:19.623310 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-16 03:37:19.623321 | orchestrator | Monday 16 February 2026 03:37:16 +0000 (0:00:00.800) 0:00:50.918 ******* 2026-02-16 03:37:19.623332 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:37:19.623343 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:37:19.623354 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:37:19.623364 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:37:19.623375 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:37:19.623386 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:37:19.623397 | orchestrator | 2026-02-16 03:37:19.623408 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-16 03:37:19.623418 | orchestrator | Monday 16 February 2026 03:37:17 +0000 (0:00:00.562) 0:00:51.481 ******* 2026-02-16 03:37:19.623429 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:37:19.623440 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:37:19.623451 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:37:19.623462 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:37:19.623472 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:37:19.623483 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:37:19.623494 | orchestrator | 2026-02-16 03:37:19.623505 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-16 03:37:19.623516 | orchestrator | Monday 16 February 2026 03:37:17 +0000 (0:00:00.772) 0:00:52.253 ******* 2026-02-16 03:37:19.623526 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:37:19.623537 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:37:19.623548 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:37:19.623558 | orchestrator | skipping: [testbed-node-0] 2026-02-16 
03:37:19.623569 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:37:19.623580 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:37:19.623591 | orchestrator | 2026-02-16 03:37:19.623602 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-16 03:37:19.623613 | orchestrator | Monday 16 February 2026 03:37:18 +0000 (0:00:00.601) 0:00:52.855 ******* 2026-02-16 03:37:19.623624 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:19.623634 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:37:19.623651 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:37:19.623690 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:37:19.623709 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:37:19.623727 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:37:19.623745 | orchestrator | 2026-02-16 03:37:19.623758 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-16 03:37:19.623769 | orchestrator | Monday 16 February 2026 03:37:19 +0000 (0:00:00.805) 0:00:53.660 ******* 2026-02-16 03:37:19.623780 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:37:19.623798 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:38:32.550291 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:38:32.550434 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:38:32.550450 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:38:32.550462 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:38:32.550474 | orchestrator | 2026-02-16 03:38:32.550487 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-16 03:38:32.550525 | orchestrator | Monday 16 February 2026 03:37:19 +0000 (0:00:00.586) 0:00:54.247 ******* 2026-02-16 03:38:32.550536 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:38:32.550547 | orchestrator | skipping: [testbed-node-4] 2026-02-16 
03:38:32.550558 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:38:32.550569 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:38:32.550582 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:38:32.550593 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:38:32.550604 | orchestrator | 2026-02-16 03:38:32.550615 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-16 03:38:32.550626 | orchestrator | Monday 16 February 2026 03:37:20 +0000 (0:00:00.829) 0:00:55.077 ******* 2026-02-16 03:38:32.550637 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:38:32.550648 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:38:32.550658 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:38:32.550669 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:38:32.550680 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:38:32.550690 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:38:32.550701 | orchestrator | 2026-02-16 03:38:32.550712 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-16 03:38:32.550723 | orchestrator | Monday 16 February 2026 03:37:21 +0000 (0:00:00.594) 0:00:55.671 ******* 2026-02-16 03:38:32.550734 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:38:32.550770 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:38:32.550781 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:38:32.550792 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:38:32.550804 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:38:32.550816 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:38:32.550829 | orchestrator | 2026-02-16 03:38:32.550841 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-16 03:38:32.550854 | orchestrator | Monday 16 February 2026 03:37:22 +0000 (0:00:01.233) 0:00:56.905 ******* 2026-02-16 03:38:32.550867 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:38:32.550880 | 
orchestrator | changed: [testbed-node-4]
2026-02-16 03:38:32.550892 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:38:32.550905 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:38:32.550918 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:38:32.550931 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:38:32.550943 | orchestrator |
2026-02-16 03:38:32.550956 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-16 03:38:32.550969 | orchestrator | Monday 16 February 2026 03:37:24 +0000 (0:00:01.753) 0:00:58.659 *******
2026-02-16 03:38:32.550981 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:38:32.550994 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:38:32.551006 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:38:32.551019 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:38:32.551032 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:38:32.551043 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:38:32.551054 | orchestrator |
2026-02-16 03:38:32.551065 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-16 03:38:32.551076 | orchestrator | Monday 16 February 2026 03:37:26 +0000 (0:00:02.147) 0:01:00.806 *******
2026-02-16 03:38:32.551088 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:38:32.551101 | orchestrator |
2026-02-16 03:38:32.551112 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-16 03:38:32.551123 | orchestrator | Monday 16 February 2026 03:37:27 +0000 (0:00:01.202) 0:01:02.008 *******
2026-02-16 03:38:32.551133 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:32.551144 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:32.551155 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:32.551166 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:32.551177 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:32.551188 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:32.551207 | orchestrator |
2026-02-16 03:38:32.551218 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-16 03:38:32.551229 | orchestrator | Monday 16 February 2026 03:37:28 +0000 (0:00:00.605) 0:01:02.614 *******
2026-02-16 03:38:32.551240 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:32.551250 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:32.551261 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:32.551272 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:32.551283 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:32.551293 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:32.551304 | orchestrator |
2026-02-16 03:38:32.551315 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-16 03:38:32.551326 | orchestrator | Monday 16 February 2026 03:37:29 +0000 (0:00:00.808) 0:01:03.422 *******
2026-02-16 03:38:32.551337 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-16 03:38:32.551348 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-16 03:38:32.551359 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-16 03:38:32.551370 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-16 03:38:32.551381 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-16 03:38:32.551405 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-16 03:38:32.551417 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-16 03:38:32.551428 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-16 03:38:32.551439 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-16 03:38:32.551469 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-16 03:38:32.551481 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-16 03:38:32.551492 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-16 03:38:32.551503 | orchestrator |
2026-02-16 03:38:32.551514 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-16 03:38:32.551525 | orchestrator | Monday 16 February 2026 03:37:30 +0000 (0:00:01.293) 0:01:04.716 *******
2026-02-16 03:38:32.551536 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:38:32.551547 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:38:32.551558 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:38:32.551568 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:38:32.551579 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:38:32.551590 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:38:32.551601 | orchestrator |
2026-02-16 03:38:32.551612 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-16 03:38:32.551623 | orchestrator | Monday 16 February 2026 03:37:31 +0000 (0:00:01.137) 0:01:05.854 *******
2026-02-16 03:38:32.551633 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:32.551644 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:32.551655 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:32.551665 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:32.551676 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:32.551687 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:32.551698 | orchestrator |
2026-02-16 03:38:32.551709 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-16 03:38:32.551720 | orchestrator | Monday 16 February 2026 03:37:32 +0000 (0:00:00.775) 0:01:06.450 *******
2026-02-16 03:38:32.551730 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:32.551758 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:32.551769 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:32.551787 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:32.551798 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:32.551809 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:32.551820 | orchestrator |
2026-02-16 03:38:32.551831 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-16 03:38:32.551842 | orchestrator | Monday 16 February 2026 03:37:32 +0000 (0:00:00.604) 0:01:07.226 *******
2026-02-16 03:38:32.551852 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:32.551863 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:32.551874 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:32.551885 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:32.551896 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:32.551907 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:32.551917 | orchestrator |
2026-02-16 03:38:32.551929 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-16 03:38:32.551939 | orchestrator | Monday 16 February 2026 03:37:33 +0000 (0:00:00.604) 0:01:07.831 *******
2026-02-16 03:38:32.551951 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:38:32.551962 | orchestrator |
2026-02-16 03:38:32.551973 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-16 03:38:32.551984 | orchestrator | Monday 16 February 2026 03:37:34 +0000 (0:00:01.237) 0:01:09.068 *******
2026-02-16 03:38:32.551995 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:38:32.552006 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:38:32.552017 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:38:32.552028 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:38:32.552038 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:38:32.552050 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:38:32.552060 | orchestrator |
2026-02-16 03:38:32.552071 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-16 03:38:32.552083 | orchestrator | Monday 16 February 2026 03:38:31 +0000 (0:00:57.128) 0:02:06.197 *******
2026-02-16 03:38:32.552094 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-16 03:38:32.552105 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-16 03:38:32.552115 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-16 03:38:32.552126 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:32.552137 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-16 03:38:32.552148 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-16 03:38:32.552159 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-16 03:38:32.552170 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:32.552181 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-16 03:38:32.552192 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-16 03:38:32.552202 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-16 03:38:32.552213 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:32.552224 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-16 03:38:32.552235 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-16 03:38:32.552252 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-16 03:38:32.552263 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:32.552274 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-16 03:38:32.552285 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-16 03:38:32.552296 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-16 03:38:32.552319 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:55.745682 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-16 03:38:55.745904 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-16 03:38:55.745916 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-16 03:38:55.745921 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:55.745927 | orchestrator |
2026-02-16 03:38:55.745955 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-16 03:38:55.745961 | orchestrator | Monday 16 February 2026 03:38:32 +0000 (0:00:00.659) 0:02:06.857 *******
2026-02-16 03:38:55.745966 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:55.745970 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:55.745974 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:55.745979 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:55.745983 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:55.745988 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:55.745992 | orchestrator |
2026-02-16 03:38:55.745997 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-16 03:38:55.746001 | orchestrator | Monday 16 February 2026 03:38:33 +0000 (0:00:00.797) 0:02:07.654 *******
2026-02-16 03:38:55.746005 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:55.746009 | orchestrator |
2026-02-16 03:38:55.746013 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-16 03:38:55.746150 | orchestrator | Monday 16 February 2026 03:38:33 +0000 (0:00:00.151) 0:02:07.806 *******
2026-02-16 03:38:55.746156 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:55.746160 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:55.746165 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:55.746169 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:55.746173 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:55.746177 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:55.746181 | orchestrator |
2026-02-16 03:38:55.746185 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-16 03:38:55.746194 | orchestrator | Monday 16 February 2026 03:38:34 +0000 (0:00:00.593) 0:02:08.399 *******
2026-02-16 03:38:55.746199 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:55.746203 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:55.746207 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:55.746211 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:55.746215 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:55.746219 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:55.746223 | orchestrator |
2026-02-16 03:38:55.746227 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-16 03:38:55.746231 | orchestrator | Monday 16 February 2026 03:38:34 +0000 (0:00:00.829) 0:02:09.229 *******
2026-02-16 03:38:55.746236 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:55.746243 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:55.746249 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:55.746256 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:55.746263 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:55.746269 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:55.746276 | orchestrator |
2026-02-16 03:38:55.746283 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-16 03:38:55.746290 | orchestrator | Monday 16 February 2026 03:38:35 +0000 (0:00:00.666) 0:02:09.895 *******
2026-02-16 03:38:55.746296 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:38:55.746304 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:38:55.746313 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:38:55.746317 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:38:55.746321 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:38:55.746325 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:38:55.746330 | orchestrator |
2026-02-16 03:38:55.746334 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-16 03:38:55.746354 | orchestrator | Monday 16 February 2026 03:38:39 +0000 (0:00:03.489) 0:02:13.385 *******
2026-02-16 03:38:55.746359 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:38:55.746363 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:38:55.746367 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:38:55.746371 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:38:55.746394 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:38:55.746399 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:38:55.746403 | orchestrator |
2026-02-16 03:38:55.746408 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-16 03:38:55.746412 | orchestrator | Monday 16 February 2026 03:38:39 +0000 (0:00:00.597) 0:02:13.982 *******
2026-02-16 03:38:55.746417 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:38:55.746423 | orchestrator |
2026-02-16 03:38:55.746427 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-16 03:38:55.746431 | orchestrator | Monday 16 February 2026 03:38:40 +0000 (0:00:01.237) 0:02:15.220 *******
2026-02-16 03:38:55.746435 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:55.746439 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:55.746443 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:55.746447 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:55.746451 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:55.746455 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:55.746459 | orchestrator |
2026-02-16 03:38:55.746464 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-16 03:38:55.746492 | orchestrator | Monday 16 February 2026 03:38:41 +0000 (0:00:00.835) 0:02:16.055 *******
2026-02-16 03:38:55.746688 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:55.746702 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:55.746707 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:55.746711 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:55.746715 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:55.746719 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:55.746723 | orchestrator |
2026-02-16 03:38:55.746727 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-16 03:38:55.746731 | orchestrator | Monday 16 February 2026 03:38:42 +0000 (0:00:00.604) 0:02:16.660 *******
2026-02-16 03:38:55.746735 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:55.746752 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:55.746757 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:55.746761 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:55.746765 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:55.746783 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:55.746787 | orchestrator |
2026-02-16 03:38:55.746791 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-16 03:38:55.746796 | orchestrator | Monday 16 February 2026 03:38:43 +0000 (0:00:00.824) 0:02:17.485 *******
2026-02-16 03:38:55.746800 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:55.746804 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:55.746808 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:55.746812 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:55.746816 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:55.746820 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:55.746824 | orchestrator |
2026-02-16 03:38:55.746828 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-16 03:38:55.746832 | orchestrator | Monday 16 February 2026 03:38:43 +0000 (0:00:00.587) 0:02:18.072 *******
2026-02-16 03:38:55.746836 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:55.746840 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:55.746844 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:55.746848 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:55.746859 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:55.746863 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:55.746867 | orchestrator |
2026-02-16 03:38:55.746871 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-16 03:38:55.746875 | orchestrator | Monday 16 February 2026 03:38:44 +0000 (0:00:00.805) 0:02:18.878 *******
2026-02-16 03:38:55.746880 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:55.746884 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:55.746888 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:55.746892 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:55.746896 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:55.746900 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:55.746904 | orchestrator |
2026-02-16 03:38:55.746908 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-16 03:38:55.746912 | orchestrator | Monday 16 February 2026 03:38:45 +0000 (0:00:00.592) 0:02:19.471 *******
2026-02-16 03:38:55.746916 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:55.746920 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:55.746924 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:55.746928 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:55.746932 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:55.746936 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:55.746940 | orchestrator |
2026-02-16 03:38:55.746945 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-16 03:38:55.746949 | orchestrator | Monday 16 February 2026 03:38:45 +0000 (0:00:00.809) 0:02:20.281 *******
2026-02-16 03:38:55.746953 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:38:55.746957 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:38:55.746961 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:38:55.746965 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:38:55.746969 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:38:55.746973 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:38:55.746977 | orchestrator |
2026-02-16 03:38:55.746981 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-16 03:38:55.746985 | orchestrator | Monday 16 February 2026 03:38:46 +0000 (0:00:00.599) 0:02:20.881 *******
2026-02-16 03:38:55.746990 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:38:55.746994 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:38:55.746998 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:38:55.747002 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:38:55.747006 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:38:55.747010 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:38:55.747014 | orchestrator |
2026-02-16 03:38:55.747018 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-16 03:38:55.747022 | orchestrator | Monday 16 February 2026 03:38:47 +0000 (0:00:01.344) 0:02:22.225 *******
2026-02-16 03:38:55.747027 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:38:55.747033 | orchestrator |
2026-02-16 03:38:55.747037 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-16 03:38:55.747041 | orchestrator | Monday 16 February 2026 03:38:49 +0000 (0:00:01.378) 0:02:23.603 *******
2026-02-16 03:38:55.747046 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-02-16 03:38:55.747050 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-02-16 03:38:55.747054 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-02-16 03:38:55.747058 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-02-16 03:38:55.747062 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-02-16 03:38:55.747067 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-02-16 03:38:55.747071 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-02-16 03:38:55.747075 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-02-16 03:38:55.747082 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-16 03:38:55.747086 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-16 03:38:55.747090 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-02-16 03:38:55.747098 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-02-16 03:38:55.747102 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-16 03:38:55.747106 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-16 03:38:55.747110 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-02-16 03:38:55.747114 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-16 03:38:55.747118 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-02-16 03:38:55.747125 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-16 03:39:01.258063 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-16 03:39:01.258178 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-16 03:39:01.258194 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-02-16 03:39:01.258207 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-16 03:39:01.258218 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-02-16 03:39:01.258229 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-16 03:39:01.258240 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-16 03:39:01.258251 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-02-16 03:39:01.258262 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-16 03:39:01.258273 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-16 03:39:01.258284 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-16 03:39:01.258295 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-02-16 03:39:01.258306 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-16 03:39:01.258317 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-16 03:39:01.258327 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-02-16 03:39:01.258338 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-16 03:39:01.258350 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-02-16 03:39:01.258361 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-16 03:39:01.258372 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-16 03:39:01.258383 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-02-16 03:39:01.258394 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-16 03:39:01.258405 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-16 03:39:01.258416 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-16 03:39:01.258427 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-02-16 03:39:01.258437 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-02-16 03:39:01.258448 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-16 03:39:01.258459 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-16 03:39:01.258470 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-16 03:39:01.258481 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-16 03:39:01.258492 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-16 03:39:01.258503 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-02-16 03:39:01.258514 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-16 03:39:01.258525 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-16 03:39:01.258561 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-16 03:39:01.258575 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-16 03:39:01.258588 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-16 03:39:01.258601 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-16 03:39:01.258613 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-16 03:39:01.258626 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-16 03:39:01.258653 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-16 03:39:01.258667 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-16 03:39:01.258679 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-16 03:39:01.258692 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-16 03:39:01.258705 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-16 03:39:01.258717 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-16 03:39:01.258730 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-16 03:39:01.258742 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-16 03:39:01.258754 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-16 03:39:01.258767 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-16 03:39:01.258806 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-16 03:39:01.258820 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-16 03:39:01.258833 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-16 03:39:01.258859 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-16 03:39:01.258873 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-16 03:39:01.258885 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-16 03:39:01.258898 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-16 03:39:01.258911 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-16 03:39:01.258924 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-16 03:39:01.258956 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-16 03:39:01.258968 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-16 03:39:01.258979 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-16 03:39:01.258990 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-16 03:39:01.259002 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-02-16 03:39:01.259013 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-16 03:39:01.259024 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-16 03:39:01.259035 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-02-16 03:39:01.259046 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-16 03:39:01.259057 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-02-16 03:39:01.259068 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-02-16 03:39:01.259093 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-16 03:39:01.259115 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-02-16 03:39:01.259126 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-02-16 03:39:01.259137 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-02-16 03:39:01.259155 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-02-16 03:39:01.259166 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-02-16 03:39:01.259177 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-02-16 03:39:01.259188 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-02-16 03:39:01.259199 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-02-16 03:39:01.259209 | orchestrator |
2026-02-16 03:39:01.259223 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-16 03:39:01.259234 | orchestrator | Monday 16 February 2026 03:38:55 +0000 (0:00:06.437) 0:02:30.041 *******
2026-02-16 03:39:01.259245 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:39:01.259256 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:39:01.259267 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:39:01.259278 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 03:39:01.259291 | orchestrator |
2026-02-16 03:39:01.259302 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-16 03:39:01.259313 | orchestrator | Monday 16 February 2026 03:38:56 +0000 (0:00:01.129) 0:02:31.170 *******
2026-02-16 03:39:01.259324 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-16 03:39:01.259336 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-16 03:39:01.259347 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-16 03:39:01.259358 | orchestrator |
2026-02-16 03:39:01.259369 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-16 03:39:01.259380 | orchestrator | Monday 16 February 2026 03:38:57 +0000 (0:00:00.792) 0:02:31.963 *******
2026-02-16 03:39:01.259391 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-16 03:39:01.259402 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-16 03:39:01.259413 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-16 03:39:01.259424 | orchestrator |
2026-02-16 03:39:01.259435 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-16 03:39:01.259446 | orchestrator | Monday 16 February 2026 03:38:58 +0000 (0:00:01.179) 0:02:33.142 *******
2026-02-16 03:39:01.259457 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:39:01.259468 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:39:01.259479 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:39:01.259489 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:39:01.259500 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:39:01.259511 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:39:01.259522 | orchestrator |
2026-02-16 03:39:01.259533 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-16 03:39:01.259544 | orchestrator | Monday 16 February 2026 03:38:59 +0000 (0:00:00.895) 0:02:34.037 *******
2026-02-16 03:39:01.259555 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:39:01.259566 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:39:01.259576 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:39:01.259587 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:39:01.259598 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:39:01.259609 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:39:01.259620 | orchestrator |
2026-02-16 03:39:01.259636 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-16 03:39:01.259647 | orchestrator | Monday 16 February 2026 03:39:00 +0000 (0:00:00.653) 0:02:34.691 *******
2026-02-16 03:39:01.259678 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:39:01.259689 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:39:01.259700 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:39:01.259711 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:39:01.259722 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:39:01.259733 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:39:01.259744 | orchestrator |
2026-02-16 03:39:01.259762 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-16 03:39:14.901660 | orchestrator | Monday 16 February 2026 03:39:01 +0000 (0:00:00.874) 0:02:35.566 *******
2026-02-16 03:39:14.901842 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:39:14.901862 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:39:14.901873 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:39:14.901883 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:39:14.901894 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:39:14.901903 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:39:14.901913 | orchestrator |
2026-02-16 03:39:14.901925 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-16 03:39:14.901935 | orchestrator | Monday 16 February 2026 03:39:01 +0000 (0:00:00.621) 0:02:36.188 *******
2026-02-16 03:39:14.901945 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:39:14.901954 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:39:14.901964 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:39:14.901974 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:39:14.901983 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:39:14.901993 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:39:14.902002 | orchestrator |
2026-02-16 03:39:14.902013 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-16 03:39:14.902079 | orchestrator | Monday 16 February 2026 03:39:02 +0000 (0:00:00.883) 0:02:37.072 *******
2026-02-16 03:39:14.902090 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:39:14.902099 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:39:14.902109 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:39:14.902119 | orchestrator | skipping:
[testbed-node-0] 2026-02-16 03:39:14.902129 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:14.902138 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:14.902148 | orchestrator | 2026-02-16 03:39:14.902158 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-16 03:39:14.902168 | orchestrator | Monday 16 February 2026 03:39:03 +0000 (0:00:00.593) 0:02:37.665 ******* 2026-02-16 03:39:14.902177 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:14.902187 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:39:14.902198 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:39:14.902210 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:14.902222 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:14.902233 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:14.902244 | orchestrator | 2026-02-16 03:39:14.902255 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-16 03:39:14.902273 | orchestrator | Monday 16 February 2026 03:39:04 +0000 (0:00:00.875) 0:02:38.541 ******* 2026-02-16 03:39:14.902290 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:14.902306 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:39:14.902322 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:39:14.902338 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:14.902354 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:14.902371 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:14.902386 | orchestrator | 2026-02-16 03:39:14.902403 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-16 03:39:14.902421 | orchestrator | Monday 16 February 2026 03:39:04 +0000 (0:00:00.606) 0:02:39.148 ******* 2026-02-16 03:39:14.902470 | orchestrator | skipping: 
[testbed-node-0] 2026-02-16 03:39:14.902490 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:14.902506 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:14.902523 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:39:14.902542 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:39:14.902559 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:39:14.902575 | orchestrator | 2026-02-16 03:39:14.902592 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-16 03:39:14.902609 | orchestrator | Monday 16 February 2026 03:39:08 +0000 (0:00:03.196) 0:02:42.344 ******* 2026-02-16 03:39:14.902626 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:39:14.902643 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:39:14.902660 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:39:14.902676 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:14.902691 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:14.902701 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:14.902710 | orchestrator | 2026-02-16 03:39:14.902720 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-16 03:39:14.902730 | orchestrator | Monday 16 February 2026 03:39:08 +0000 (0:00:00.616) 0:02:42.960 ******* 2026-02-16 03:39:14.902739 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:39:14.902749 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:39:14.902758 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:39:14.902768 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:14.902777 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:14.902787 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:14.902835 | orchestrator | 2026-02-16 03:39:14.902845 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-16 03:39:14.902855 | orchestrator | Monday 16 February 2026 03:39:09 +0000 
(0:00:00.902) 0:02:43.863 ******* 2026-02-16 03:39:14.902865 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:14.902875 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:39:14.902884 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:39:14.902894 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:14.902904 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:14.902914 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:14.902923 | orchestrator | 2026-02-16 03:39:14.902933 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-16 03:39:14.902957 | orchestrator | Monday 16 February 2026 03:39:10 +0000 (0:00:00.581) 0:02:44.444 ******* 2026-02-16 03:39:14.902969 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-16 03:39:14.902981 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-16 03:39:14.902990 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-16 03:39:14.903000 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:14.903030 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:14.903041 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:14.903050 | orchestrator | 2026-02-16 03:39:14.903060 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-16 03:39:14.903070 | orchestrator | Monday 16 February 2026 03:39:10 +0000 (0:00:00.877) 0:02:45.322 ******* 2026-02-16 03:39:14.903082 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-02-16 03:39:14.903096 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-02-16 03:39:14.903118 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:14.903129 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-02-16 03:39:14.903139 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-02-16 03:39:14.903149 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:39:14.903159 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-02-16 03:39:14.903169 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 
 2026-02-16 03:39:14.903179 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:39:14.903188 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:14.903198 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:14.903208 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:14.903217 | orchestrator | 2026-02-16 03:39:14.903227 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-16 03:39:14.903237 | orchestrator | Monday 16 February 2026 03:39:11 +0000 (0:00:00.682) 0:02:46.005 ******* 2026-02-16 03:39:14.903247 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:14.903257 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:39:14.903266 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:39:14.903276 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:14.903285 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:14.903295 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:14.903304 | orchestrator | 2026-02-16 03:39:14.903314 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-16 03:39:14.903324 | orchestrator | Monday 16 February 2026 03:39:12 +0000 (0:00:00.890) 0:02:46.895 ******* 2026-02-16 03:39:14.903337 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:14.903354 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:39:14.903372 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:39:14.903393 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:14.903416 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:14.903431 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:14.903448 | orchestrator | 2026-02-16 03:39:14.903465 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-16 03:39:14.903481 | orchestrator | Monday 16 February 
2026 03:39:13 +0000 (0:00:00.815) 0:02:47.711 ******* 2026-02-16 03:39:14.903498 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:14.903514 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:39:14.903531 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:39:14.903548 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:14.903565 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:14.903581 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:14.903599 | orchestrator | 2026-02-16 03:39:14.903626 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-16 03:39:14.903642 | orchestrator | Monday 16 February 2026 03:39:14 +0000 (0:00:00.659) 0:02:48.370 ******* 2026-02-16 03:39:14.903671 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:14.903689 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:39:14.903707 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:39:14.903723 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:14.903742 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:14.903760 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:14.903777 | orchestrator | 2026-02-16 03:39:14.903821 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-16 03:39:14.903850 | orchestrator | Monday 16 February 2026 03:39:14 +0000 (0:00:00.833) 0:02:49.204 ******* 2026-02-16 03:39:31.810342 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:31.810462 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:39:31.810479 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:39:31.810491 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:31.810502 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:31.810513 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:31.810524 | orchestrator | 2026-02-16 03:39:31.810537 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-16 03:39:31.810549 | orchestrator | Monday 16 February 2026 03:39:15 +0000 (0:00:00.633) 0:02:49.837 ******* 2026-02-16 03:39:31.810560 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:39:31.810572 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:39:31.810582 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:39:31.810593 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:31.810604 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:31.810615 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:31.810626 | orchestrator | 2026-02-16 03:39:31.810637 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-16 03:39:31.810648 | orchestrator | Monday 16 February 2026 03:39:16 +0000 (0:00:00.858) 0:02:50.695 ******* 2026-02-16 03:39:31.810659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-16 03:39:31.810670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-16 03:39:31.810681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-16 03:39:31.810691 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:31.810702 | orchestrator | 2026-02-16 03:39:31.810713 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-16 03:39:31.810724 | orchestrator | Monday 16 February 2026 03:39:16 +0000 (0:00:00.450) 0:02:51.146 ******* 2026-02-16 03:39:31.810736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-16 03:39:31.810747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-16 03:39:31.810757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-16 03:39:31.810768 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:31.810779 | orchestrator | 2026-02-16 03:39:31.810790 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-16 03:39:31.810801 | orchestrator | Monday 16 February 2026 03:39:17 +0000 (0:00:00.399) 0:02:51.545 ******* 2026-02-16 03:39:31.810811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-16 03:39:31.810853 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-16 03:39:31.810866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-16 03:39:31.810878 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:31.810891 | orchestrator | 2026-02-16 03:39:31.810904 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-16 03:39:31.810916 | orchestrator | Monday 16 February 2026 03:39:17 +0000 (0:00:00.436) 0:02:51.981 ******* 2026-02-16 03:39:31.810929 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:39:31.810941 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:39:31.810954 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:39:31.810966 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:31.810978 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:31.810991 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:31.811025 | orchestrator | 2026-02-16 03:39:31.811038 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-16 03:39:31.811050 | orchestrator | Monday 16 February 2026 03:39:18 +0000 (0:00:00.629) 0:02:52.611 ******* 2026-02-16 03:39:31.811063 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-16 03:39:31.811075 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-16 03:39:31.811087 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-16 03:39:31.811100 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-16 03:39:31.811113 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:31.811125 | orchestrator | skipping: [testbed-node-1] => 
(item=0)  2026-02-16 03:39:31.811136 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:31.811147 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-16 03:39:31.811157 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:31.811168 | orchestrator | 2026-02-16 03:39:31.811179 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-16 03:39:31.811190 | orchestrator | Monday 16 February 2026 03:39:20 +0000 (0:00:01.795) 0:02:54.407 ******* 2026-02-16 03:39:31.811201 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:39:31.811211 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:39:31.811222 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:39:31.811233 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:39:31.811243 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:39:31.811254 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:39:31.811265 | orchestrator | 2026-02-16 03:39:31.811275 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-16 03:39:31.811286 | orchestrator | Monday 16 February 2026 03:39:22 +0000 (0:00:02.612) 0:02:57.020 ******* 2026-02-16 03:39:31.811297 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:39:31.811308 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:39:31.811318 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:39:31.811329 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:39:31.811339 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:39:31.811350 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:39:31.811361 | orchestrator | 2026-02-16 03:39:31.811372 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-16 03:39:31.811397 | orchestrator | Monday 16 February 2026 03:39:23 +0000 (0:00:01.018) 0:02:58.038 ******* 2026-02-16 03:39:31.811408 | orchestrator | skipping: 
[testbed-node-3] 2026-02-16 03:39:31.811419 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:39:31.811429 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:39:31.811441 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:39:31.811452 | orchestrator | 2026-02-16 03:39:31.811463 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-02-16 03:39:31.811474 | orchestrator | Monday 16 February 2026 03:39:24 +0000 (0:00:01.057) 0:02:59.096 ******* 2026-02-16 03:39:31.811485 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:39:31.811513 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:39:31.811524 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:39:31.811535 | orchestrator | 2026-02-16 03:39:31.811546 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-02-16 03:39:31.811557 | orchestrator | Monday 16 February 2026 03:39:25 +0000 (0:00:00.321) 0:02:59.417 ******* 2026-02-16 03:39:31.811568 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:39:31.811578 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:39:31.811589 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:39:31.811600 | orchestrator | 2026-02-16 03:39:31.811610 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-02-16 03:39:31.811621 | orchestrator | Monday 16 February 2026 03:39:26 +0000 (0:00:01.451) 0:03:00.868 ******* 2026-02-16 03:39:31.811632 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-16 03:39:31.811642 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-16 03:39:31.811661 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-16 03:39:31.811672 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:31.811683 | orchestrator | 2026-02-16 
03:39:31.811693 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-02-16 03:39:31.811704 | orchestrator | Monday 16 February 2026 03:39:27 +0000 (0:00:00.658) 0:03:01.527 ******* 2026-02-16 03:39:31.811715 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:39:31.811725 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:39:31.811736 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:39:31.811747 | orchestrator | 2026-02-16 03:39:31.811758 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-16 03:39:31.811769 | orchestrator | Monday 16 February 2026 03:39:27 +0000 (0:00:00.344) 0:03:01.872 ******* 2026-02-16 03:39:31.811780 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:31.811790 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:31.811801 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:31.811812 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:39:31.811853 | orchestrator | 2026-02-16 03:39:31.811864 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-16 03:39:31.811875 | orchestrator | Monday 16 February 2026 03:39:28 +0000 (0:00:01.066) 0:03:02.939 ******* 2026-02-16 03:39:31.811886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-16 03:39:31.811897 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-16 03:39:31.811907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-16 03:39:31.811918 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:31.811929 | orchestrator | 2026-02-16 03:39:31.811939 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-02-16 03:39:31.811950 | orchestrator | Monday 16 February 2026 03:39:29 +0000 (0:00:00.402) 
0:03:03.342 ******* 2026-02-16 03:39:31.811961 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:31.811972 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:39:31.811983 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:39:31.811993 | orchestrator | 2026-02-16 03:39:31.812004 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-02-16 03:39:31.812015 | orchestrator | Monday 16 February 2026 03:39:29 +0000 (0:00:00.312) 0:03:03.654 ******* 2026-02-16 03:39:31.812025 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:31.812036 | orchestrator | 2026-02-16 03:39:31.812047 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-02-16 03:39:31.812058 | orchestrator | Monday 16 February 2026 03:39:29 +0000 (0:00:00.238) 0:03:03.892 ******* 2026-02-16 03:39:31.812068 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:31.812079 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:39:31.812090 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:39:31.812101 | orchestrator | 2026-02-16 03:39:31.812111 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-02-16 03:39:31.812122 | orchestrator | Monday 16 February 2026 03:39:30 +0000 (0:00:00.529) 0:03:04.421 ******* 2026-02-16 03:39:31.812133 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:31.812143 | orchestrator | 2026-02-16 03:39:31.812154 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-02-16 03:39:31.812165 | orchestrator | Monday 16 February 2026 03:39:30 +0000 (0:00:00.239) 0:03:04.661 ******* 2026-02-16 03:39:31.812176 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:31.812187 | orchestrator | 2026-02-16 03:39:31.812197 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-02-16 03:39:31.812208 
| orchestrator | Monday 16 February 2026 03:39:30 +0000 (0:00:00.230) 0:03:04.892 ******* 2026-02-16 03:39:31.812219 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:31.812230 | orchestrator | 2026-02-16 03:39:31.812240 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-02-16 03:39:31.812259 | orchestrator | Monday 16 February 2026 03:39:30 +0000 (0:00:00.148) 0:03:05.040 ******* 2026-02-16 03:39:31.812270 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:31.812281 | orchestrator | 2026-02-16 03:39:31.812292 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-02-16 03:39:31.812302 | orchestrator | Monday 16 February 2026 03:39:30 +0000 (0:00:00.234) 0:03:05.275 ******* 2026-02-16 03:39:31.812313 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:31.812324 | orchestrator | 2026-02-16 03:39:31.812340 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-02-16 03:39:31.812351 | orchestrator | Monday 16 February 2026 03:39:31 +0000 (0:00:00.233) 0:03:05.508 ******* 2026-02-16 03:39:31.812362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-16 03:39:31.812373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-16 03:39:31.812384 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-16 03:39:31.812394 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:31.812405 | orchestrator | 2026-02-16 03:39:31.812416 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-02-16 03:39:31.812427 | orchestrator | Monday 16 February 2026 03:39:31 +0000 (0:00:00.420) 0:03:05.929 ******* 2026-02-16 03:39:31.812444 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:50.200573 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:39:50.200690 | orchestrator | 
skipping: [testbed-node-5] 2026-02-16 03:39:50.200705 | orchestrator | 2026-02-16 03:39:50.200719 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-02-16 03:39:50.200732 | orchestrator | Monday 16 February 2026 03:39:31 +0000 (0:00:00.320) 0:03:06.250 ******* 2026-02-16 03:39:50.200744 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:50.200755 | orchestrator | 2026-02-16 03:39:50.200767 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-02-16 03:39:50.200778 | orchestrator | Monday 16 February 2026 03:39:32 +0000 (0:00:00.247) 0:03:06.497 ******* 2026-02-16 03:39:50.200789 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:50.200800 | orchestrator | 2026-02-16 03:39:50.200811 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-16 03:39:50.200822 | orchestrator | Monday 16 February 2026 03:39:32 +0000 (0:00:00.225) 0:03:06.723 ******* 2026-02-16 03:39:50.200833 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:50.200913 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:50.200926 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:50.200938 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:39:50.200950 | orchestrator | 2026-02-16 03:39:50.200961 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-02-16 03:39:50.200972 | orchestrator | Monday 16 February 2026 03:39:33 +0000 (0:00:01.083) 0:03:07.806 ******* 2026-02-16 03:39:50.200983 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:39:50.200995 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:39:50.201006 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:39:50.201017 | orchestrator | 2026-02-16 03:39:50.201029 | orchestrator | RUNNING HANDLER [ceph-handler : 
Copy mds restart script] *********************** 2026-02-16 03:39:50.201040 | orchestrator | Monday 16 February 2026 03:39:33 +0000 (0:00:00.333) 0:03:08.139 ******* 2026-02-16 03:39:50.201052 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:39:50.201063 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:39:50.201074 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:39:50.201085 | orchestrator | 2026-02-16 03:39:50.201097 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-16 03:39:50.201109 | orchestrator | Monday 16 February 2026 03:39:35 +0000 (0:00:01.503) 0:03:09.643 ******* 2026-02-16 03:39:50.201122 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-16 03:39:50.201134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-16 03:39:50.201171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-16 03:39:50.201185 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:50.201197 | orchestrator | 2026-02-16 03:39:50.201210 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-16 03:39:50.201221 | orchestrator | Monday 16 February 2026 03:39:35 +0000 (0:00:00.625) 0:03:10.269 ******* 2026-02-16 03:39:50.201232 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:39:50.201243 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:39:50.201253 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:39:50.201264 | orchestrator | 2026-02-16 03:39:50.201275 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-16 03:39:50.201286 | orchestrator | Monday 16 February 2026 03:39:36 +0000 (0:00:00.353) 0:03:10.622 ******* 2026-02-16 03:39:50.201297 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:50.201308 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:50.201319 | orchestrator | skipping: 
[testbed-node-2] 2026-02-16 03:39:50.201329 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:39:50.201340 | orchestrator | 2026-02-16 03:39:50.201351 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-16 03:39:50.201362 | orchestrator | Monday 16 February 2026 03:39:37 +0000 (0:00:01.003) 0:03:11.626 ******* 2026-02-16 03:39:50.201372 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:39:50.201383 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:39:50.201394 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:39:50.201405 | orchestrator | 2026-02-16 03:39:50.201416 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-16 03:39:50.201457 | orchestrator | Monday 16 February 2026 03:39:37 +0000 (0:00:00.320) 0:03:11.946 ******* 2026-02-16 03:39:50.201468 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:39:50.201479 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:39:50.201490 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:39:50.201500 | orchestrator | 2026-02-16 03:39:50.201511 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-02-16 03:39:50.201522 | orchestrator | Monday 16 February 2026 03:39:38 +0000 (0:00:01.198) 0:03:13.145 ******* 2026-02-16 03:39:50.201533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-16 03:39:50.201544 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-16 03:39:50.201555 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-16 03:39:50.201565 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:50.201576 | orchestrator | 2026-02-16 03:39:50.201587 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-16 03:39:50.201598 | 
orchestrator | Monday 16 February 2026 03:39:39 +0000 (0:00:00.857) 0:03:14.003 ******* 2026-02-16 03:39:50.201622 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:39:50.201634 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:39:50.201645 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:39:50.201655 | orchestrator | 2026-02-16 03:39:50.201666 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-16 03:39:50.201677 | orchestrator | Monday 16 February 2026 03:39:40 +0000 (0:00:00.587) 0:03:14.590 ******* 2026-02-16 03:39:50.201688 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:50.201699 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:39:50.201710 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:39:50.201720 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:50.201732 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:50.201742 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:50.201753 | orchestrator | 2026-02-16 03:39:50.201782 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-16 03:39:50.201794 | orchestrator | Monday 16 February 2026 03:39:40 +0000 (0:00:00.638) 0:03:15.229 ******* 2026-02-16 03:39:50.201815 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:39:50.201826 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:39:50.201837 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:39:50.201889 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:39:50.201901 | orchestrator | 2026-02-16 03:39:50.201912 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-16 03:39:50.201923 | orchestrator | Monday 16 February 2026 03:39:42 +0000 (0:00:01.127) 0:03:16.356 ******* 2026-02-16 03:39:50.201942 | orchestrator | ok: 
[testbed-node-0] 2026-02-16 03:39:50.201961 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:39:50.201988 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:39:50.202014 | orchestrator | 2026-02-16 03:39:50.202079 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-16 03:39:50.202100 | orchestrator | Monday 16 February 2026 03:39:42 +0000 (0:00:00.350) 0:03:16.706 ******* 2026-02-16 03:39:50.202119 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:39:50.202137 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:39:50.202153 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:39:50.202169 | orchestrator | 2026-02-16 03:39:50.202185 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-16 03:39:50.202202 | orchestrator | Monday 16 February 2026 03:39:43 +0000 (0:00:01.214) 0:03:17.921 ******* 2026-02-16 03:39:50.202219 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-16 03:39:50.202236 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-16 03:39:50.202253 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-16 03:39:50.202273 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:50.202294 | orchestrator | 2026-02-16 03:39:50.202315 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-16 03:39:50.202336 | orchestrator | Monday 16 February 2026 03:39:44 +0000 (0:00:01.071) 0:03:18.992 ******* 2026-02-16 03:39:50.202356 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:39:50.202375 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:39:50.202393 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:39:50.202412 | orchestrator | 2026-02-16 03:39:50.202431 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-02-16 03:39:50.202451 | orchestrator | 2026-02-16 
03:39:50.202470 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-16 03:39:50.202492 | orchestrator | Monday 16 February 2026 03:39:45 +0000 (0:00:00.591) 0:03:19.584 ******* 2026-02-16 03:39:50.202511 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:39:50.202534 | orchestrator | 2026-02-16 03:39:50.202553 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-16 03:39:50.202572 | orchestrator | Monday 16 February 2026 03:39:45 +0000 (0:00:00.713) 0:03:20.298 ******* 2026-02-16 03:39:50.202584 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:39:50.202595 | orchestrator | 2026-02-16 03:39:50.202606 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-16 03:39:50.202617 | orchestrator | Monday 16 February 2026 03:39:46 +0000 (0:00:00.569) 0:03:20.868 ******* 2026-02-16 03:39:50.202628 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:39:50.202639 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:39:50.202650 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:39:50.202661 | orchestrator | 2026-02-16 03:39:50.202671 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-16 03:39:50.202682 | orchestrator | Monday 16 February 2026 03:39:47 +0000 (0:00:00.793) 0:03:21.661 ******* 2026-02-16 03:39:50.202693 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:50.202704 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:50.202715 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:50.202738 | orchestrator | 2026-02-16 03:39:50.202749 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-02-16 03:39:50.202759 | orchestrator | Monday 16 February 2026 03:39:47 +0000 (0:00:00.513) 0:03:22.175 ******* 2026-02-16 03:39:50.202770 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:50.202781 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:50.202792 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:50.202803 | orchestrator | 2026-02-16 03:39:50.202814 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-16 03:39:50.202824 | orchestrator | Monday 16 February 2026 03:39:48 +0000 (0:00:00.413) 0:03:22.588 ******* 2026-02-16 03:39:50.202835 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:50.202882 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:50.202902 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:50.202920 | orchestrator | 2026-02-16 03:39:50.202938 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-16 03:39:50.202956 | orchestrator | Monday 16 February 2026 03:39:48 +0000 (0:00:00.322) 0:03:22.911 ******* 2026-02-16 03:39:50.202971 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:39:50.202987 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:39:50.203005 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:39:50.203023 | orchestrator | 2026-02-16 03:39:50.203053 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-16 03:39:50.203065 | orchestrator | Monday 16 February 2026 03:39:49 +0000 (0:00:00.723) 0:03:23.635 ******* 2026-02-16 03:39:50.203076 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:50.203087 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:50.203098 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:39:50.203108 | orchestrator | 2026-02-16 03:39:50.203119 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-16 
03:39:50.203131 | orchestrator | Monday 16 February 2026 03:39:49 +0000 (0:00:00.525) 0:03:24.161 ******* 2026-02-16 03:39:50.203141 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:39:50.203152 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:39:50.203178 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:40:11.559562 | orchestrator | 2026-02-16 03:40:11.559741 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-16 03:40:11.559774 | orchestrator | Monday 16 February 2026 03:39:50 +0000 (0:00:00.347) 0:03:24.509 ******* 2026-02-16 03:40:11.559794 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:40:11.559813 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:40:11.559831 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:40:11.559850 | orchestrator | 2026-02-16 03:40:11.559904 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-16 03:40:11.559927 | orchestrator | Monday 16 February 2026 03:39:50 +0000 (0:00:00.761) 0:03:25.270 ******* 2026-02-16 03:40:11.559946 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:40:11.559966 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:40:11.559985 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:40:11.560007 | orchestrator | 2026-02-16 03:40:11.560029 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-16 03:40:11.560051 | orchestrator | Monday 16 February 2026 03:39:51 +0000 (0:00:00.719) 0:03:25.990 ******* 2026-02-16 03:40:11.560072 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:40:11.560094 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:40:11.560114 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:40:11.560133 | orchestrator | 2026-02-16 03:40:11.560151 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-16 03:40:11.560170 | orchestrator | Monday 
16 February 2026 03:39:52 +0000 (0:00:00.607) 0:03:26.597 ******* 2026-02-16 03:40:11.560189 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:40:11.560208 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:40:11.560227 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:40:11.560247 | orchestrator | 2026-02-16 03:40:11.560266 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-16 03:40:11.560323 | orchestrator | Monday 16 February 2026 03:39:52 +0000 (0:00:00.388) 0:03:26.985 ******* 2026-02-16 03:40:11.560343 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:40:11.560361 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:40:11.560379 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:40:11.560397 | orchestrator | 2026-02-16 03:40:11.560416 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-16 03:40:11.560435 | orchestrator | Monday 16 February 2026 03:39:53 +0000 (0:00:00.350) 0:03:27.336 ******* 2026-02-16 03:40:11.560454 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:40:11.560473 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:40:11.560492 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:40:11.560510 | orchestrator | 2026-02-16 03:40:11.560529 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-16 03:40:11.560548 | orchestrator | Monday 16 February 2026 03:39:53 +0000 (0:00:00.347) 0:03:27.683 ******* 2026-02-16 03:40:11.560566 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:40:11.560584 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:40:11.560602 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:40:11.560623 | orchestrator | 2026-02-16 03:40:11.560641 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-16 03:40:11.560660 | orchestrator | Monday 16 February 2026 
03:39:53 +0000 (0:00:00.603) 0:03:28.287 ******* 2026-02-16 03:40:11.560679 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:40:11.560696 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:40:11.560714 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:40:11.560733 | orchestrator | 2026-02-16 03:40:11.560747 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-16 03:40:11.560758 | orchestrator | Monday 16 February 2026 03:39:54 +0000 (0:00:00.313) 0:03:28.600 ******* 2026-02-16 03:40:11.560769 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:40:11.560780 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:40:11.560790 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:40:11.560801 | orchestrator | 2026-02-16 03:40:11.560812 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-16 03:40:11.560823 | orchestrator | Monday 16 February 2026 03:39:54 +0000 (0:00:00.303) 0:03:28.903 ******* 2026-02-16 03:40:11.560834 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:40:11.560845 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:40:11.560856 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:40:11.560867 | orchestrator | 2026-02-16 03:40:11.560901 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-16 03:40:11.560913 | orchestrator | Monday 16 February 2026 03:39:54 +0000 (0:00:00.330) 0:03:29.234 ******* 2026-02-16 03:40:11.560923 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:40:11.560934 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:40:11.560945 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:40:11.560956 | orchestrator | 2026-02-16 03:40:11.560967 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-16 03:40:11.560978 | orchestrator | Monday 16 February 2026 03:39:55 +0000 (0:00:00.552) 
0:03:29.786 ******* 2026-02-16 03:40:11.560989 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:40:11.561000 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:40:11.561010 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:40:11.561021 | orchestrator | 2026-02-16 03:40:11.561032 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-16 03:40:11.561043 | orchestrator | Monday 16 February 2026 03:39:55 +0000 (0:00:00.534) 0:03:30.320 ******* 2026-02-16 03:40:11.561054 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:40:11.561064 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:40:11.561076 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:40:11.561087 | orchestrator | 2026-02-16 03:40:11.561116 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-16 03:40:11.561127 | orchestrator | Monday 16 February 2026 03:39:56 +0000 (0:00:00.327) 0:03:30.647 ******* 2026-02-16 03:40:11.561151 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:40:11.561162 | orchestrator | 2026-02-16 03:40:11.561173 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-16 03:40:11.561184 | orchestrator | Monday 16 February 2026 03:39:57 +0000 (0:00:00.810) 0:03:31.457 ******* 2026-02-16 03:40:11.561195 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:40:11.561206 | orchestrator | 2026-02-16 03:40:11.561217 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-16 03:40:11.561252 | orchestrator | Monday 16 February 2026 03:39:57 +0000 (0:00:00.150) 0:03:31.608 ******* 2026-02-16 03:40:11.561263 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-16 03:40:11.561274 | orchestrator | 2026-02-16 03:40:11.561285 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-02-16 03:40:11.561296 | orchestrator | Monday 16 February 2026 03:39:58 +0000 (0:00:00.952) 0:03:32.561 ******* 2026-02-16 03:40:11.561306 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:40:11.561317 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:40:11.561328 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:40:11.561338 | orchestrator | 2026-02-16 03:40:11.561349 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-16 03:40:11.561360 | orchestrator | Monday 16 February 2026 03:39:58 +0000 (0:00:00.308) 0:03:32.870 ******* 2026-02-16 03:40:11.561371 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:40:11.561381 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:40:11.561392 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:40:11.561403 | orchestrator | 2026-02-16 03:40:11.561413 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-16 03:40:11.561424 | orchestrator | Monday 16 February 2026 03:39:59 +0000 (0:00:00.560) 0:03:33.431 ******* 2026-02-16 03:40:11.561435 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:40:11.561446 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:40:11.561457 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:40:11.561467 | orchestrator | 2026-02-16 03:40:11.561478 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-16 03:40:11.561489 | orchestrator | Monday 16 February 2026 03:40:00 +0000 (0:00:01.237) 0:03:34.668 ******* 2026-02-16 03:40:11.561500 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:40:11.561511 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:40:11.561522 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:40:11.561532 | orchestrator | 2026-02-16 03:40:11.561543 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2026-02-16 03:40:11.561554 | orchestrator | Monday 16 February 2026 03:40:01 +0000 (0:00:00.784) 0:03:35.453 ******* 2026-02-16 03:40:11.561565 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:40:11.561575 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:40:11.561586 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:40:11.561597 | orchestrator | 2026-02-16 03:40:11.561608 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-16 03:40:11.561618 | orchestrator | Monday 16 February 2026 03:40:01 +0000 (0:00:00.666) 0:03:36.119 ******* 2026-02-16 03:40:11.561629 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:40:11.561640 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:40:11.561650 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:40:11.561661 | orchestrator | 2026-02-16 03:40:11.561672 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-16 03:40:11.561683 | orchestrator | Monday 16 February 2026 03:40:02 +0000 (0:00:00.950) 0:03:37.069 ******* 2026-02-16 03:40:11.561693 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:40:11.561704 | orchestrator | 2026-02-16 03:40:11.561715 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-16 03:40:11.561726 | orchestrator | Monday 16 February 2026 03:40:03 +0000 (0:00:01.229) 0:03:38.299 ******* 2026-02-16 03:40:11.561737 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:40:11.561754 | orchestrator | 2026-02-16 03:40:11.561765 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-16 03:40:11.561776 | orchestrator | Monday 16 February 2026 03:40:04 +0000 (0:00:00.710) 0:03:39.010 ******* 2026-02-16 03:40:11.561787 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-16 03:40:11.561798 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-16 03:40:11.561808 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-16 03:40:11.561819 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-16 03:40:11.561830 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-16 03:40:11.561841 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-16 03:40:11.561852 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-16 03:40:11.561862 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-16 03:40:11.561890 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-16 03:40:11.561901 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-16 03:40:11.561912 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-16 03:40:11.561923 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-16 03:40:11.561933 | orchestrator | 2026-02-16 03:40:11.561944 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-16 03:40:11.561955 | orchestrator | Monday 16 February 2026 03:40:07 +0000 (0:00:03.271) 0:03:42.281 ******* 2026-02-16 03:40:11.561965 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:40:11.561976 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:40:11.561987 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:40:11.561998 | orchestrator | 2026-02-16 03:40:11.562008 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-16 03:40:11.562127 | orchestrator | Monday 16 February 2026 03:40:09 +0000 (0:00:01.226) 0:03:43.508 ******* 2026-02-16 03:40:11.562140 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:40:11.562151 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:40:11.562162 | orchestrator | ok: [testbed-node-2] 
2026-02-16 03:40:11.562173 | orchestrator | 2026-02-16 03:40:11.562191 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-16 03:40:11.562202 | orchestrator | Monday 16 February 2026 03:40:09 +0000 (0:00:00.599) 0:03:44.107 ******* 2026-02-16 03:40:11.562213 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:40:11.562224 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:40:11.562235 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:40:11.562245 | orchestrator | 2026-02-16 03:40:11.562256 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-16 03:40:11.562267 | orchestrator | Monday 16 February 2026 03:40:10 +0000 (0:00:00.326) 0:03:44.434 ******* 2026-02-16 03:40:11.562278 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:40:11.562289 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:40:11.562300 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:40:11.562310 | orchestrator | 2026-02-16 03:40:11.562332 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-16 03:41:12.791732 | orchestrator | Monday 16 February 2026 03:40:11 +0000 (0:00:01.429) 0:03:45.864 ******* 2026-02-16 03:41:12.791846 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:41:12.791860 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:41:12.791867 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:41:12.791875 | orchestrator | 2026-02-16 03:41:12.791883 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-16 03:41:12.791891 | orchestrator | Monday 16 February 2026 03:40:12 +0000 (0:00:01.300) 0:03:47.164 ******* 2026-02-16 03:41:12.791898 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:41:12.791905 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:41:12.791912 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:41:12.791930 
| orchestrator | 2026-02-16 03:41:12.791938 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-16 03:41:12.792011 | orchestrator | Monday 16 February 2026 03:40:13 +0000 (0:00:00.550) 0:03:47.714 ******* 2026-02-16 03:41:12.792021 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:41:12.792029 | orchestrator | 2026-02-16 03:41:12.792037 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-16 03:41:12.792044 | orchestrator | Monday 16 February 2026 03:40:13 +0000 (0:00:00.550) 0:03:48.265 ******* 2026-02-16 03:41:12.792050 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:41:12.792058 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:41:12.792065 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:41:12.792072 | orchestrator | 2026-02-16 03:41:12.792079 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-16 03:41:12.792086 | orchestrator | Monday 16 February 2026 03:40:14 +0000 (0:00:00.297) 0:03:48.563 ******* 2026-02-16 03:41:12.792092 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:41:12.792099 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:41:12.792106 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:41:12.792113 | orchestrator | 2026-02-16 03:41:12.792120 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-16 03:41:12.792127 | orchestrator | Monday 16 February 2026 03:40:14 +0000 (0:00:00.535) 0:03:49.098 ******* 2026-02-16 03:41:12.792134 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:41:12.792142 | orchestrator | 2026-02-16 03:41:12.792149 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2026-02-16 03:41:12.792156 | orchestrator | Monday 16 February 2026 03:40:15 +0000 (0:00:00.538) 0:03:49.637 ******* 2026-02-16 03:41:12.792162 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:41:12.792169 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:41:12.792176 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:41:12.792182 | orchestrator | 2026-02-16 03:41:12.792189 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-16 03:41:12.792196 | orchestrator | Monday 16 February 2026 03:40:17 +0000 (0:00:01.790) 0:03:51.428 ******* 2026-02-16 03:41:12.792203 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:41:12.792209 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:41:12.792216 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:41:12.792222 | orchestrator | 2026-02-16 03:41:12.792229 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-16 03:41:12.792235 | orchestrator | Monday 16 February 2026 03:40:18 +0000 (0:00:01.387) 0:03:52.816 ******* 2026-02-16 03:41:12.792242 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:41:12.792248 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:41:12.792254 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:41:12.792260 | orchestrator | 2026-02-16 03:41:12.792266 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-16 03:41:12.792272 | orchestrator | Monday 16 February 2026 03:40:20 +0000 (0:00:01.764) 0:03:54.580 ******* 2026-02-16 03:41:12.792279 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:41:12.792286 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:41:12.792292 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:41:12.792299 | orchestrator | 2026-02-16 03:41:12.792306 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
**********************************
2026-02-16 03:41:12.792312 | orchestrator | Monday 16 February 2026 03:40:22 +0000 (0:00:01.963) 0:03:56.543 *******
2026-02-16 03:41:12.792319 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:41:12.792326 | orchestrator |
2026-02-16 03:41:12.792333 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-02-16 03:41:12.792339 | orchestrator | Monday 16 February 2026 03:40:23 +0000 (0:00:00.807) 0:03:57.351 *******
2026-02-16 03:41:12.792345 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-02-16 03:41:12.792361 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:41:12.792368 | orchestrator |
2026-02-16 03:41:12.792375 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-16 03:41:12.792382 | orchestrator | Monday 16 February 2026 03:40:44 +0000 (0:00:21.964) 0:04:19.315 *******
2026-02-16 03:41:12.792389 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:41:12.792409 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:41:12.792416 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:41:12.792423 | orchestrator |
2026-02-16 03:41:12.792431 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-16 03:41:12.792438 | orchestrator | Monday 16 February 2026 03:40:54 +0000 (0:00:09.220) 0:04:28.536 *******
2026-02-16 03:41:12.792445 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:41:12.792452 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:41:12.792459 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:41:12.792466 | orchestrator |
2026-02-16 03:41:12.792473 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-16 03:41:12.792480 | orchestrator | Monday 16 February 2026 03:40:54 +0000 (0:00:00.311) 0:04:28.847 *******
2026-02-16 03:41:12.792508 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__466459ff4bad80f9b58dedffe72991525340a112'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-16 03:41:12.792518 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__466459ff4bad80f9b58dedffe72991525340a112'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-16 03:41:12.792527 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__466459ff4bad80f9b58dedffe72991525340a112'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-16 03:41:12.792537 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__466459ff4bad80f9b58dedffe72991525340a112'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-16 03:41:12.792545 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__466459ff4bad80f9b58dedffe72991525340a112'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-16 03:41:12.792552 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__466459ff4bad80f9b58dedffe72991525340a112'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__466459ff4bad80f9b58dedffe72991525340a112'}])
2026-02-16 03:41:12.792560 | orchestrator |
2026-02-16 03:41:12.792567 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-16 03:41:12.792574 | orchestrator | Monday 16 February 2026 03:41:09 +0000 (0:00:14.771) 0:04:43.619 *******
2026-02-16 03:41:12.792587 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:41:12.792593 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:41:12.792600 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:41:12.792607 | orchestrator |
2026-02-16 03:41:12.792614 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-16 03:41:12.792621 | orchestrator | Monday 16 February 2026 03:41:09 +0000 (0:00:00.336) 0:04:43.956 *******
2026-02-16 03:41:12.792629 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:41:12.792636 | orchestrator |
2026-02-16 03:41:12.792643 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-16 03:41:12.792650 | orchestrator | Monday 16 February 2026 03:41:10 +0000 (0:00:00.779) 0:04:44.736 *******
2026-02-16 03:41:12.792656 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:41:12.792663 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:41:12.792670 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:41:12.792678 | orchestrator |
2026-02-16 03:41:12.792685 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-16 03:41:12.792691 | orchestrator | Monday 16 February 2026 03:41:10 +0000 (0:00:00.347) 0:04:45.084 *******
2026-02-16 03:41:12.792698 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:41:12.792704 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:41:12.792711 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:41:12.792718 | orchestrator |
2026-02-16 03:41:12.792725 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-16 03:41:12.792733 | orchestrator | Monday 16 February 2026 03:41:11 +0000 (0:00:00.338) 0:04:45.423 *******
2026-02-16 03:41:12.792739 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-16 03:41:12.792747 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-16 03:41:12.792753 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-16 03:41:12.792761 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:41:12.792767 | orchestrator |
2026-02-16 03:41:12.792774 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-16 03:41:12.792781 | orchestrator | Monday 16 February 2026 03:41:11 +0000 (0:00:00.843) 0:04:46.266 *******
2026-02-16 03:41:12.792789 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:41:12.792796 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:41:12.792803 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:41:12.792809 | orchestrator |
2026-02-16 03:41:12.792816 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-02-16 03:41:12.792823 | orchestrator |
2026-02-16 03:41:12.792836 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-16 03:41:39.097123 | orchestrator | Monday 16 February 2026 03:41:12 +0000 (0:00:00.829) 0:04:47.095 *******
2026-02-16 03:41:39.097270 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:41:39.097297 | orchestrator |
2026-02-16 03:41:39.097311 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-16 03:41:39.097322 | orchestrator | Monday 16 February 2026 03:41:13 +0000 (0:00:00.512) 0:04:47.608 *******
2026-02-16 03:41:39.097341 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:41:39.097359 | orchestrator |
2026-02-16 03:41:39.097379 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-16 03:41:39.097397 | orchestrator | Monday 16 February 2026 03:41:14 +0000 (0:00:00.724) 0:04:48.332 *******
2026-02-16 03:41:39.097416 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:41:39.097436 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:41:39.097455 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:41:39.097473 | orchestrator |
2026-02-16 03:41:39.097492 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-16 03:41:39.097510 | orchestrator | Monday 16 February 2026 03:41:14 +0000 (0:00:00.726) 0:04:49.058 *******
2026-02-16 03:41:39.097560 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:41:39.097582 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:41:39.097595 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:41:39.097607 | orchestrator |
2026-02-16 03:41:39.097620 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-16 03:41:39.097633 | orchestrator | Monday 16 February 2026 03:41:15 +0000 (0:00:00.325) 0:04:49.384 *******
2026-02-16 03:41:39.097645 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:41:39.097658 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:41:39.097671 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:41:39.097683 | orchestrator |
2026-02-16 03:41:39.097695 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-16 03:41:39.097707 | orchestrator | Monday 16 February 2026 03:41:15 +0000 (0:00:00.509) 0:04:49.894 *******
2026-02-16 03:41:39.097719 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:41:39.097731 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:41:39.097744 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:41:39.097756 | orchestrator |
2026-02-16 03:41:39.097769 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-16 03:41:39.097782 | orchestrator | Monday 16 February 2026 03:41:15 +0000 (0:00:00.307) 0:04:50.201 *******
2026-02-16 03:41:39.097795 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:41:39.097807 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:41:39.097820 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:41:39.097833 | orchestrator |
2026-02-16 03:41:39.097846 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-16 03:41:39.097858 | orchestrator | Monday 16 February 2026 03:41:16 +0000 (0:00:00.726) 0:04:50.928 *******
2026-02-16 03:41:39.097871 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:41:39.097883 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:41:39.097894 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:41:39.097904 | orchestrator |
2026-02-16 03:41:39.097915 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-16 03:41:39.097926 | orchestrator | Monday 16 February 2026 03:41:16 +0000 (0:00:00.295) 0:04:51.224 *******
2026-02-16 03:41:39.097936 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:41:39.097947 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:41:39.097958 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:41:39.097968 | orchestrator |
2026-02-16 03:41:39.097979 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-16 03:41:39.098082 | orchestrator | Monday 16 February 2026 03:41:17 +0000 (0:00:00.516) 0:04:51.741 *******
2026-02-16 03:41:39.098100 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:41:39.098111 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:41:39.098121 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:41:39.098132 | orchestrator |
2026-02-16 03:41:39.098201 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-16 03:41:39.098214 | orchestrator | Monday 16 February 2026 03:41:18 +0000 (0:00:00.869) 0:04:52.610 *******
2026-02-16 03:41:39.098225 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:41:39.098236 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:41:39.098246 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:41:39.098257 | orchestrator |
2026-02-16 03:41:39.098268 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-16 03:41:39.098279 | orchestrator | Monday 16 February 2026 03:41:19 +0000 (0:00:00.804) 0:04:53.414 *******
2026-02-16 03:41:39.098290 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:41:39.098301 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:41:39.098312 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:41:39.098322 | orchestrator |
2026-02-16 03:41:39.098333 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-16 03:41:39.098344 | orchestrator | Monday 16 February 2026 03:41:19 +0000 (0:00:00.305) 0:04:53.720 *******
2026-02-16 03:41:39.098355 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:41:39.098377 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:41:39.098393 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:41:39.098404 | orchestrator |
2026-02-16 03:41:39.098415 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-16 03:41:39.098426 | orchestrator | Monday 16 February 2026 03:41:19 +0000 (0:00:00.595) 0:04:54.316 *******
2026-02-16 03:41:39.098437 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:41:39.098447 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:41:39.098458 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:41:39.098469 | orchestrator |
2026-02-16 03:41:39.098479 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-16 03:41:39.098490 | orchestrator | Monday 16 February 2026 03:41:20 +0000 (0:00:00.327) 0:04:54.644 *******
2026-02-16 03:41:39.098501 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:41:39.098512 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:41:39.098522 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:41:39.098533 | orchestrator |
2026-02-16 03:41:39.098569 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-16 03:41:39.098590 | orchestrator | Monday 16 February 2026 03:41:20 +0000 (0:00:00.319) 0:04:54.964 *******
2026-02-16 03:41:39.098608 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:41:39.098626 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:41:39.098644 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:41:39.098661 | orchestrator |
2026-02-16 03:41:39.098679 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-16 03:41:39.098698 | orchestrator | Monday 16 February 2026 03:41:20 +0000 (0:00:00.312) 0:04:55.276 *******
2026-02-16 03:41:39.098718 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:41:39.098737 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:41:39.098756 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:41:39.098769 | orchestrator |
2026-02-16 03:41:39.098779 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-16 03:41:39.098790 | orchestrator | Monday 16 February 2026 03:41:21 +0000 (0:00:00.589) 0:04:55.866 *******
2026-02-16 03:41:39.098801 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:41:39.098811 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:41:39.098822 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:41:39.098833 | orchestrator |
2026-02-16 03:41:39.098843 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-16 03:41:39.098854 | orchestrator | Monday 16 February 2026 03:41:21 +0000 (0:00:00.325) 0:04:56.191 *******
2026-02-16 03:41:39.098873 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:41:39.098891 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:41:39.098909 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:41:39.098928 | orchestrator |
2026-02-16 03:41:39.098946 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-16 03:41:39.098964 | orchestrator | Monday 16 February 2026 03:41:22 +0000 (0:00:00.328) 0:04:56.519 *******
2026-02-16 03:41:39.098984 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:41:39.099030 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:41:39.099042 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:41:39.099053 | orchestrator |
2026-02-16 03:41:39.099064 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-16 03:41:39.099075 | orchestrator | Monday 16 February 2026 03:41:22 +0000 (0:00:00.320) 0:04:56.839 *******
2026-02-16 03:41:39.099085 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:41:39.099096 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:41:39.099107 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:41:39.099118 | orchestrator |
2026-02-16 03:41:39.099129 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-16 03:41:39.099140 | orchestrator | Monday 16 February 2026 03:41:23 +0000 (0:00:00.789) 0:04:57.629 *******
2026-02-16 03:41:39.099151 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-16 03:41:39.099162 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-16 03:41:39.099185 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-16 03:41:39.099195 | orchestrator |
2026-02-16 03:41:39.099206 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-16 03:41:39.099217 | orchestrator | Monday 16 February 2026 03:41:23 +0000 (0:00:00.632) 0:04:58.261 *******
2026-02-16 03:41:39.099228 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:41:39.099239 | orchestrator |
2026-02-16 03:41:39.099250 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-16 03:41:39.099261 | orchestrator | Monday 16 February 2026 03:41:24 +0000 (0:00:00.736) 0:04:58.998 *******
2026-02-16 03:41:39.099272 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:41:39.099283 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:41:39.099293 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:41:39.099304 | orchestrator |
2026-02-16 03:41:39.099315 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-16 03:41:39.099326 | orchestrator | Monday 16 February 2026 03:41:25 +0000 (0:00:00.718) 0:04:59.716 *******
2026-02-16 03:41:39.099337 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:41:39.099347 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:41:39.099358 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:41:39.099369 | orchestrator |
2026-02-16 03:41:39.099380 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-16 03:41:39.099390 | orchestrator | Monday 16 February 2026 03:41:25 +0000 (0:00:00.310) 0:05:00.027 *******
2026-02-16 03:41:39.099401 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-16 03:41:39.099412 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-16 03:41:39.099423 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-16 03:41:39.099434 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-02-16 03:41:39.099445 | orchestrator |
2026-02-16 03:41:39.099456 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-16 03:41:39.099467 | orchestrator | Monday 16 February 2026 03:41:36 +0000 (0:00:10.567) 0:05:10.595 *******
2026-02-16 03:41:39.099478 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:41:39.099489 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:41:39.099500 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:41:39.099510 | orchestrator |
2026-02-16 03:41:39.099527 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-16 03:41:39.099539 | orchestrator | Monday 16 February 2026 03:41:36 +0000 (0:00:00.329) 0:05:10.924 *******
2026-02-16 03:41:39.099550 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-16 03:41:39.099560 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-16 03:41:39.099571 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-16 03:41:39.099582 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-16 03:41:39.099593 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:41:39.099604 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:41:39.099614 | orchestrator |
2026-02-16 03:41:39.099625 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-16 03:41:39.099645 | orchestrator | Monday 16 February 2026 03:41:39 +0000 (0:00:02.471) 0:05:13.395 *******
2026-02-16 03:42:34.717998 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-16 03:42:34.718202 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-16 03:42:34.718228 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-16 03:42:34.718249 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-16 03:42:34.718270 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-16 03:42:34.718290 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-16 03:42:34.718310 | orchestrator |
2026-02-16 03:42:34.718331 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-16 03:42:34.718381 | orchestrator | Monday 16 February 2026 03:41:40 +0000 (0:00:01.267) 0:05:14.663 *******
2026-02-16 03:42:34.718401 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:42:34.718421 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:42:34.718440 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:42:34.718459 | orchestrator |
2026-02-16 03:42:34.718479 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-16 03:42:34.718498 | orchestrator | Monday 16 February 2026 03:41:41 +0000 (0:00:00.740) 0:05:15.403 *******
2026-02-16 03:42:34.718517 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:42:34.718538 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:42:34.718559 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:42:34.718579 | orchestrator |
2026-02-16 03:42:34.718601 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-16 03:42:34.718622 | orchestrator | Monday 16 February 2026 03:41:41 +0000 (0:00:00.295) 0:05:15.698 *******
2026-02-16 03:42:34.718642 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:42:34.718663 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:42:34.718682 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:42:34.718702 | orchestrator |
2026-02-16 03:42:34.718722 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-16 03:42:34.718742 | orchestrator | Monday 16 February 2026 03:41:41 +0000 (0:00:00.554) 0:05:16.253 *******
2026-02-16 03:42:34.718763 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:42:34.718785 | orchestrator |
2026-02-16 03:42:34.718808 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-16 03:42:34.718828 | orchestrator | Monday 16 February 2026 03:41:42 +0000 (0:00:00.541) 0:05:16.794 *******
2026-02-16 03:42:34.718848 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:42:34.718869 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:42:34.718889 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:42:34.718907 | orchestrator |
2026-02-16 03:42:34.718927 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-16 03:42:34.718947 | orchestrator | Monday 16 February 2026 03:41:42 +0000 (0:00:00.329) 0:05:17.124 *******
2026-02-16 03:42:34.718966 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:42:34.718986 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:42:34.719005 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:42:34.719024 | orchestrator |
2026-02-16 03:42:34.719044 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-16 03:42:34.719064 | orchestrator | Monday 16 February 2026 03:41:43 +0000 (0:00:00.580) 0:05:17.704 *******
2026-02-16 03:42:34.719106 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:42:34.719125 | orchestrator |
2026-02-16 03:42:34.719144 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-16 03:42:34.719162 | orchestrator | Monday 16 February 2026 03:41:43 +0000 (0:00:00.532) 0:05:18.237 *******
2026-02-16 03:42:34.719181 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:42:34.719199 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:42:34.719218 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:42:34.719236 | orchestrator |
2026-02-16 03:42:34.719254 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-16 03:42:34.719273 | orchestrator | Monday 16 February 2026 03:41:45 +0000 (0:00:01.254) 0:05:19.491 *******
2026-02-16 03:42:34.719291 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:42:34.719309 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:42:34.719328 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:42:34.719346 | orchestrator |
2026-02-16 03:42:34.719365 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-16 03:42:34.719383 | orchestrator | Monday 16 February 2026 03:41:46 +0000 (0:00:01.449) 0:05:20.941 *******
2026-02-16 03:42:34.719424 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:42:34.719442 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:42:34.719460 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:42:34.719478 | orchestrator |
2026-02-16 03:42:34.719498 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-16 03:42:34.719518 | orchestrator | Monday 16 February 2026 03:41:48 +0000 (0:00:01.817) 0:05:22.759 *******
2026-02-16 03:42:34.719536 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:42:34.719555 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:42:34.719574 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:42:34.719592 | orchestrator |
2026-02-16 03:42:34.719611 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-16 03:42:34.719645 | orchestrator | Monday 16 February 2026 03:41:50 +0000 (0:00:01.959) 0:05:24.718 *******
2026-02-16 03:42:34.719664 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:42:34.719683 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:42:34.719702 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-02-16 03:42:34.719720 | orchestrator |
2026-02-16 03:42:34.719739 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-02-16 03:42:34.719758 | orchestrator | Monday 16 February 2026 03:41:51 +0000 (0:00:00.692) 0:05:25.411 *******
2026-02-16 03:42:34.719777 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-02-16 03:42:34.719796 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-02-16 03:42:34.719837 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-02-16 03:42:34.719857 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-02-16 03:42:34.719876 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-16 03:42:34.719896 | orchestrator |
2026-02-16 03:42:34.719914 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-02-16 03:42:34.719933 | orchestrator | Monday 16 February 2026 03:42:15 +0000 (0:00:24.290) 0:05:49.701 *******
2026-02-16 03:42:34.719952 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-16 03:42:34.719971 | orchestrator |
2026-02-16 03:42:34.719990 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-02-16 03:42:34.720009 | orchestrator | Monday 16 February 2026 03:42:16 +0000 (0:00:01.359) 0:05:51.061 *******
2026-02-16 03:42:34.720027 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:42:34.720046 | orchestrator |
2026-02-16 03:42:34.720105 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-02-16 03:42:34.720127 | orchestrator | Monday 16 February 2026 03:42:17 +0000 (0:00:00.317) 0:05:51.378 *******
2026-02-16 03:42:34.720146 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:42:34.720165 | orchestrator |
2026-02-16 03:42:34.720184 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-02-16 03:42:34.720204 | orchestrator | Monday 16 February 2026 03:42:17 +0000 (0:00:00.153) 0:05:51.532 *******
2026-02-16 03:42:34.720222 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-02-16 03:42:34.720241 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-02-16 03:42:34.720259 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-02-16 03:42:34.720278 | orchestrator |
2026-02-16 03:42:34.720297 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-02-16 03:42:34.720316 | orchestrator | Monday 16 February 2026 03:42:23 +0000 (0:00:06.617) 0:05:58.149 *******
2026-02-16 03:42:34.720335 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-02-16 03:42:34.720353 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-02-16 03:42:34.720384 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-02-16 03:42:34.720402 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-02-16 03:42:34.720421 | orchestrator |
2026-02-16 03:42:34.720440 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-16 03:42:34.720459 | orchestrator | Monday 16 February 2026 03:42:28 +0000 (0:00:05.094) 0:06:03.244 *******
2026-02-16 03:42:34.720478 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:42:34.720497 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:42:34.720515 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:42:34.720534 | orchestrator |
2026-02-16 03:42:34.720553 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-16 03:42:34.720571 | orchestrator | Monday 16 February 2026 03:42:29 +0000 (0:00:00.713) 0:06:03.958 *******
2026-02-16 03:42:34.720590 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:42:34.720609 | orchestrator |
2026-02-16 03:42:34.720627 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-16 03:42:34.720646 | orchestrator | Monday 16 February 2026 03:42:30 +0000 (0:00:00.531) 0:06:04.489 *******
2026-02-16 03:42:34.720665 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:42:34.720683 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:42:34.720702 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:42:34.720721 | orchestrator |
2026-02-16 03:42:34.720740 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-16 03:42:34.720759 | orchestrator | Monday 16 February 2026 03:42:30 +0000 (0:00:00.595) 0:06:05.085 *******
2026-02-16 03:42:34.720777 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:42:34.720796 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:42:34.720814 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:42:34.720832 | orchestrator |
2026-02-16 03:42:34.720850 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-16 03:42:34.720868 | orchestrator | Monday 16 February 2026 03:42:31 +0000 (0:00:01.180) 0:06:06.265 *******
2026-02-16 03:42:34.720885 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-16 03:42:34.720904 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-16 03:42:34.720923 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-16 03:42:34.720942 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:42:34.720960 | orchestrator |
2026-02-16 03:42:34.720979 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-16 03:42:34.720998 | orchestrator | Monday 16 February 2026 03:42:32 +0000 (0:00:00.637) 0:06:06.902 *******
2026-02-16 03:42:34.721017 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:42:34.721043 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:42:34.721062 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:42:34.721138 | orchestrator |
2026-02-16 03:42:34.721157 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-02-16 03:42:34.721169 | orchestrator |
2026-02-16 03:42:34.721180 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-16 03:42:34.721191 | orchestrator | Monday 16 February 2026 03:42:33 +0000 (0:00:00.820) 0:06:07.723 *******
2026-02-16 03:42:34.721202 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 03:42:34.721214 | orchestrator |
2026-02-16 03:42:34.721225 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-16 03:42:34.721236 | orchestrator | Monday 16 February 2026 03:42:33 +0000 (0:00:00.557) 0:06:08.281 *******
2026-02-16 03:42:34.721256 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 03:42:49.754738 | orchestrator |
2026-02-16 03:42:49.754905 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-16 03:42:49.754943 | orchestrator | Monday 16 February 2026 03:42:34 +0000 (0:00:00.740) 0:06:09.021 *******
2026-02-16 03:42:49.754989 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:42:49.755010 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:42:49.755028 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:42:49.755046 | orchestrator |
2026-02-16 03:42:49.755066 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-16 03:42:49.755110 | orchestrator | Monday 16 February 2026 03:42:35 +0000 (0:00:00.319) 0:06:09.341 *******
2026-02-16 03:42:49.755132 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:42:49.755151 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:42:49.755162 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:42:49.755173 | orchestrator |
2026-02-16 03:42:49.755184 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-16 03:42:49.755195 | orchestrator | Monday 16 February 2026 03:42:35 +0000 (0:00:00.698) 0:06:10.039 *******
2026-02-16 03:42:49.755206 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:42:49.755216 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:42:49.755227 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:42:49.755237 | orchestrator |
2026-02-16 03:42:49.755248 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-16 03:42:49.755259 | orchestrator | Monday 16 February 2026 03:42:36 +0000 (0:00:00.690) 0:06:10.730 *******
2026-02-16 03:42:49.755273 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:42:49.755285 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:42:49.755298 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:42:49.755310 | orchestrator |
2026-02-16 03:42:49.755322 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-16 03:42:49.755334 | orchestrator | Monday 16 February 2026 03:42:37 +0000 (0:00:00.948) 0:06:11.678 *******
2026-02-16 03:42:49.755347 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:42:49.755359 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:42:49.755372 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:42:49.755384 | orchestrator |
2026-02-16 03:42:49.755394 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-16 03:42:49.755405 | orchestrator | Monday 16 February 2026 03:42:37 +0000 (0:00:00.332) 0:06:12.011 *******
2026-02-16 03:42:49.755416 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:42:49.755426 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:42:49.755437 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:42:49.755448 | orchestrator |
2026-02-16 03:42:49.755459 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-16 03:42:49.755469 | orchestrator | Monday 16 February 2026 03:42:37 +0000 (0:00:00.310) 0:06:12.322 *******
2026-02-16 03:42:49.755480 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:42:49.755491 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:42:49.755501 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:42:49.755512 | orchestrator |
2026-02-16 03:42:49.755523 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-16 03:42:49.755533 | orchestrator | Monday 16 February 2026 03:42:38 +0000 (0:00:00.335) 0:06:12.657 *******
2026-02-16 03:42:49.755544 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:42:49.755555 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:42:49.755565 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:42:49.755577 | orchestrator |
2026-02-16 03:42:49.755587 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-16 03:42:49.755598 | orchestrator | Monday 16 February 2026 03:42:39 +0000 (0:00:00.994) 0:06:13.652 *******
2026-02-16 03:42:49.755609 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:42:49.755619 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:42:49.755630 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:42:49.755641 | orchestrator |
2026-02-16 03:42:49.755651 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-16 03:42:49.755662 | orchestrator | Monday 16 February 2026 03:42:40 +0000 (0:00:00.710) 0:06:14.362 *******
2026-02-16 03:42:49.755673 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:42:49.755714 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:42:49.755725 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:42:49.755736 | orchestrator |
2026-02-16 03:42:49.755747 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-16 03:42:49.755758 | orchestrator | Monday 16 February 2026 03:42:40 +0000 (0:00:00.331) 0:06:14.694 *******
2026-02-16 03:42:49.755769 | orchestrator | skipping: 
[testbed-node-3] 2026-02-16 03:42:49.755780 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:42:49.755791 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:42:49.755802 | orchestrator | 2026-02-16 03:42:49.755813 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-16 03:42:49.755823 | orchestrator | Monday 16 February 2026 03:42:40 +0000 (0:00:00.315) 0:06:15.010 ******* 2026-02-16 03:42:49.755834 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:42:49.755845 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:42:49.755856 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:42:49.755867 | orchestrator | 2026-02-16 03:42:49.755878 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-16 03:42:49.755902 | orchestrator | Monday 16 February 2026 03:42:41 +0000 (0:00:00.566) 0:06:15.576 ******* 2026-02-16 03:42:49.755913 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:42:49.755924 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:42:49.755935 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:42:49.755946 | orchestrator | 2026-02-16 03:42:49.755957 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-16 03:42:49.755968 | orchestrator | Monday 16 February 2026 03:42:41 +0000 (0:00:00.357) 0:06:15.933 ******* 2026-02-16 03:42:49.755979 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:42:49.755990 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:42:49.756001 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:42:49.756012 | orchestrator | 2026-02-16 03:42:49.756023 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-16 03:42:49.756033 | orchestrator | Monday 16 February 2026 03:42:41 +0000 (0:00:00.338) 0:06:16.272 ******* 2026-02-16 03:42:49.756044 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:42:49.756056 | 
orchestrator | skipping: [testbed-node-4] 2026-02-16 03:42:49.756067 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:42:49.756077 | orchestrator | 2026-02-16 03:42:49.756109 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-16 03:42:49.756140 | orchestrator | Monday 16 February 2026 03:42:42 +0000 (0:00:00.330) 0:06:16.603 ******* 2026-02-16 03:42:49.756152 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:42:49.756163 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:42:49.756173 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:42:49.756184 | orchestrator | 2026-02-16 03:42:49.756195 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-16 03:42:49.756206 | orchestrator | Monday 16 February 2026 03:42:42 +0000 (0:00:00.591) 0:06:17.195 ******* 2026-02-16 03:42:49.756216 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:42:49.756227 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:42:49.756238 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:42:49.756249 | orchestrator | 2026-02-16 03:42:49.756259 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-16 03:42:49.756270 | orchestrator | Monday 16 February 2026 03:42:43 +0000 (0:00:00.309) 0:06:17.504 ******* 2026-02-16 03:42:49.756281 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:42:49.756292 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:42:49.756302 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:42:49.756313 | orchestrator | 2026-02-16 03:42:49.756324 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-16 03:42:49.756335 | orchestrator | Monday 16 February 2026 03:42:43 +0000 (0:00:00.330) 0:06:17.834 ******* 2026-02-16 03:42:49.756345 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:42:49.756356 | orchestrator | ok: 
[testbed-node-4] 2026-02-16 03:42:49.756367 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:42:49.756385 | orchestrator | 2026-02-16 03:42:49.756396 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-16 03:42:49.756407 | orchestrator | Monday 16 February 2026 03:42:44 +0000 (0:00:00.794) 0:06:18.628 ******* 2026-02-16 03:42:49.756417 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:42:49.756428 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:42:49.756439 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:42:49.756449 | orchestrator | 2026-02-16 03:42:49.756460 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-16 03:42:49.756471 | orchestrator | Monday 16 February 2026 03:42:44 +0000 (0:00:00.332) 0:06:18.961 ******* 2026-02-16 03:42:49.756482 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-16 03:42:49.756493 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-16 03:42:49.756504 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-16 03:42:49.756515 | orchestrator | 2026-02-16 03:42:49.756526 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-16 03:42:49.756537 | orchestrator | Monday 16 February 2026 03:42:45 +0000 (0:00:00.635) 0:06:19.596 ******* 2026-02-16 03:42:49.756547 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:42:49.756558 | orchestrator | 2026-02-16 03:42:49.756569 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-16 03:42:49.756580 | orchestrator | Monday 16 February 2026 03:42:45 +0000 (0:00:00.727) 0:06:20.323 ******* 2026-02-16 03:42:49.756591 | orchestrator | skipping: 
[testbed-node-3] 2026-02-16 03:42:49.756601 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:42:49.756612 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:42:49.756623 | orchestrator | 2026-02-16 03:42:49.756633 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-16 03:42:49.756644 | orchestrator | Monday 16 February 2026 03:42:46 +0000 (0:00:00.321) 0:06:20.645 ******* 2026-02-16 03:42:49.756655 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:42:49.756666 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:42:49.756676 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:42:49.756687 | orchestrator | 2026-02-16 03:42:49.756697 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-16 03:42:49.756708 | orchestrator | Monday 16 February 2026 03:42:46 +0000 (0:00:00.311) 0:06:20.956 ******* 2026-02-16 03:42:49.756719 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:42:49.756729 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:42:49.756740 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:42:49.756751 | orchestrator | 2026-02-16 03:42:49.756761 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-16 03:42:49.756772 | orchestrator | Monday 16 February 2026 03:42:47 +0000 (0:00:00.572) 0:06:21.528 ******* 2026-02-16 03:42:49.756783 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:42:49.756794 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:42:49.756804 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:42:49.756815 | orchestrator | 2026-02-16 03:42:49.756826 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-16 03:42:49.756836 | orchestrator | Monday 16 February 2026 03:42:47 +0000 (0:00:00.586) 0:06:22.115 ******* 2026-02-16 03:42:49.756847 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-16 03:42:49.756863 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-16 03:42:49.756874 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-16 03:42:49.756885 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-16 03:42:49.756900 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-16 03:42:49.756928 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-16 03:42:49.756947 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-16 03:42:49.756965 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-16 03:42:49.756984 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-16 03:42:49.757012 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-16 03:43:52.377122 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-16 03:43:52.378746 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-16 03:43:52.378764 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-16 03:43:52.378775 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-16 03:43:52.378785 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-16 03:43:52.378796 | orchestrator | 2026-02-16 03:43:52.378808 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
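The "Apply operating system tuning" loop above writes the listed sysctl keys persistently. As a sketch, the equivalent /etc/sysctl.d fragment with the exact values reported for testbed-node-3/4/5 in this run (vm.min_free_kbytes being the host-derived value from the preceding set_fact) would look like:

```
# sysctl.d fragment mirroring the values applied by ceph-osd system tuning
fs.aio-max-nr = 1048576
fs.file-max = 26234859
vm.zone_reclaim_mode = 0
vm.swappiness = 10
vm.min_free_kbytes = 67584
```

The same settings can be applied to a running host immediately with `sysctl -w <name>=<value>`; the file form makes them survive a reboot.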
2026-02-16 03:43:52.378818 | orchestrator | Monday 16 February 2026 03:42:49 +0000 (0:00:01.937) 0:06:24.053 ******* 2026-02-16 03:43:52.378829 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:43:52.378841 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:43:52.378851 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:43:52.378860 | orchestrator | 2026-02-16 03:43:52.378870 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-16 03:43:52.378880 | orchestrator | Monday 16 February 2026 03:42:50 +0000 (0:00:00.315) 0:06:24.368 ******* 2026-02-16 03:43:52.378890 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:43:52.378901 | orchestrator | 2026-02-16 03:43:52.378911 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-16 03:43:52.378921 | orchestrator | Monday 16 February 2026 03:42:50 +0000 (0:00:00.785) 0:06:25.154 ******* 2026-02-16 03:43:52.378931 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-16 03:43:52.378941 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-16 03:43:52.378951 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-16 03:43:52.378961 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-16 03:43:52.378972 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-16 03:43:52.378982 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-16 03:43:52.378992 | orchestrator | 2026-02-16 03:43:52.379002 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-16 03:43:52.379012 | orchestrator | Monday 16 February 2026 03:42:51 +0000 (0:00:01.030) 0:06:26.184 ******* 2026-02-16 03:43:52.379022 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-16 03:43:52.379032 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-16 03:43:52.379042 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-16 03:43:52.379052 | orchestrator | 2026-02-16 03:43:52.379062 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-16 03:43:52.379071 | orchestrator | Monday 16 February 2026 03:42:53 +0000 (0:00:01.968) 0:06:28.153 ******* 2026-02-16 03:43:52.379082 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-16 03:43:52.379091 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-16 03:43:52.379102 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:43:52.379112 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-16 03:43:52.379122 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-16 03:43:52.379155 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:43:52.379165 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-16 03:43:52.379215 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-16 03:43:52.379230 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:43:52.379240 | orchestrator | 2026-02-16 03:43:52.379250 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-16 03:43:52.379260 | orchestrator | Monday 16 February 2026 03:42:54 +0000 (0:00:01.040) 0:06:29.194 ******* 2026-02-16 03:43:52.379270 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-16 03:43:52.379280 | orchestrator | 2026-02-16 03:43:52.379290 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-16 03:43:52.379300 | orchestrator | Monday 16 February 2026 03:42:56 +0000 (0:00:02.038) 0:06:31.233 ******* 2026-02-16 03:43:52.379311 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:43:52.379322 | orchestrator | 2026-02-16 03:43:52.379331 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-02-16 03:43:52.379341 | orchestrator | Monday 16 February 2026 03:42:57 +0000 (0:00:00.702) 0:06:31.935 ******* 2026-02-16 03:43:52.379368 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'}) 2026-02-16 03:43:52.379379 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'}) 2026-02-16 03:43:52.379389 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}) 2026-02-16 03:43:52.379400 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'}) 2026-02-16 03:43:52.379410 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'}) 2026-02-16 03:43:52.379444 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'}) 2026-02-16 03:43:52.379455 | orchestrator | 2026-02-16 03:43:52.379465 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-16 03:43:52.379475 | orchestrator | Monday 16 February 2026 03:43:36 +0000 (0:00:38.695) 0:07:10.631 ******* 2026-02-16 03:43:52.379485 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:43:52.379495 | orchestrator | skipping: [testbed-node-4] 2026-02-16 
03:43:52.379505 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:43:52.379515 | orchestrator | 2026-02-16 03:43:52.379524 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-16 03:43:52.379534 | orchestrator | Monday 16 February 2026 03:43:36 +0000 (0:00:00.292) 0:07:10.923 ******* 2026-02-16 03:43:52.379544 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:43:52.379554 | orchestrator | 2026-02-16 03:43:52.379564 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-16 03:43:52.379574 | orchestrator | Monday 16 February 2026 03:43:37 +0000 (0:00:00.599) 0:07:11.522 ******* 2026-02-16 03:43:52.379584 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:43:52.379594 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:43:52.379604 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:43:52.379614 | orchestrator | 2026-02-16 03:43:52.379629 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-02-16 03:43:52.379644 | orchestrator | Monday 16 February 2026 03:43:37 +0000 (0:00:00.613) 0:07:12.136 ******* 2026-02-16 03:43:52.379658 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:43:52.379673 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:43:52.379697 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:43:52.379714 | orchestrator | 2026-02-16 03:43:52.379730 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-16 03:43:52.379745 | orchestrator | Monday 16 February 2026 03:43:40 +0000 (0:00:02.458) 0:07:14.595 ******* 2026-02-16 03:43:52.379759 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:43:52.379775 | orchestrator | 2026-02-16 03:43:52.379791 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-02-16 03:43:52.379806 | orchestrator | Monday 16 February 2026 03:43:40 +0000 (0:00:00.566) 0:07:15.161 ******* 2026-02-16 03:43:52.379821 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:43:52.379871 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:43:52.379882 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:43:52.379893 | orchestrator | 2026-02-16 03:43:52.379903 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-16 03:43:52.379913 | orchestrator | Monday 16 February 2026 03:43:41 +0000 (0:00:01.093) 0:07:16.254 ******* 2026-02-16 03:43:52.379922 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:43:52.379932 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:43:52.379942 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:43:52.379952 | orchestrator | 2026-02-16 03:43:52.379963 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-16 03:43:52.379973 | orchestrator | Monday 16 February 2026 03:43:43 +0000 (0:00:01.104) 0:07:17.358 ******* 2026-02-16 03:43:52.379982 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:43:52.379992 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:43:52.380001 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:43:52.380011 | orchestrator | 2026-02-16 03:43:52.380021 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-16 03:43:52.380031 | orchestrator | Monday 16 February 2026 03:43:44 +0000 (0:00:01.808) 0:07:19.167 ******* 2026-02-16 03:43:52.380041 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:43:52.380051 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:43:52.380061 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:43:52.380070 | orchestrator | 2026-02-16 03:43:52.380080 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-02-16 03:43:52.380090 | orchestrator | Monday 16 February 2026 03:43:45 +0000 (0:00:00.313) 0:07:19.480 ******* 2026-02-16 03:43:52.380098 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:43:52.380107 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:43:52.380115 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:43:52.380124 | orchestrator | 2026-02-16 03:43:52.380133 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-16 03:43:52.380141 | orchestrator | Monday 16 February 2026 03:43:45 +0000 (0:00:00.288) 0:07:19.769 ******* 2026-02-16 03:43:52.380150 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-16 03:43:52.380158 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-02-16 03:43:52.380167 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-02-16 03:43:52.380198 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-02-16 03:43:52.380207 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-16 03:43:52.380215 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-16 03:43:52.380224 | orchestrator | 2026-02-16 03:43:52.380239 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-16 03:43:52.380249 | orchestrator | Monday 16 February 2026 03:43:46 +0000 (0:00:00.974) 0:07:20.744 ******* 2026-02-16 03:43:52.380258 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-16 03:43:52.380267 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-02-16 03:43:52.380275 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-02-16 03:43:52.380283 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-02-16 03:43:52.380292 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-16 03:43:52.380301 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-16 03:43:52.380317 | orchestrator | 2026-02-16 03:43:52.380326 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-02-16 03:43:52.380334 | orchestrator | Monday 16 February 2026 03:43:48 +0000 (0:00:02.342) 0:07:23.087 ******* 2026-02-16 03:43:52.380343 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-16 03:43:52.380351 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-02-16 03:43:52.380360 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-02-16 03:43:52.380369 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-02-16 03:43:52.380387 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-16 03:44:23.609978 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-16 03:44:23.610129 | orchestrator | 2026-02-16 03:44:23.610145 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-16 03:44:23.610156 | orchestrator | Monday 16 February 2026 03:43:52 +0000 (0:00:03.592) 0:07:26.680 ******* 2026-02-16 03:44:23.610166 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:44:23.610176 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:44:23.610185 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-16 03:44:23.610195 | orchestrator | 2026-02-16 03:44:23.610205 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-16 03:44:23.610235 | orchestrator | Monday 16 February 2026 03:43:54 +0000 (0:00:02.212) 0:07:28.892 ******* 2026-02-16 03:44:23.610244 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:44:23.610253 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:44:23.610262 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
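The "Wait for all osd to be up" task above retried until the monitor reported every OSD as up (one retry was consumed while the six freshly started OSDs joined). A minimal sketch of that readiness check, assuming the `num_osds`/`num_up_osds` counters emitted by `ceph osd stat -f json` on recent Ceph releases (older releases nest them under an `osdmap` key):

```python
import json

def all_osds_up(osd_stat_json: str) -> bool:
    """Return True once every OSD known to the osdmap reports 'up'.

    Field names are an assumption based on `ceph osd stat -f json` output;
    adjust if your Ceph release shapes the JSON differently.
    """
    stat = json.loads(osd_stat_json)
    # Some releases nest the counters under an "osdmap" key.
    stat = stat.get("osdmap", stat)
    return stat["num_osds"] > 0 and stat["num_osds"] == stat["num_up_osds"]

# Sample payload shaped like the cluster above: 6 OSDs, all up.
print(all_osds_up('{"num_osds": 6, "num_up_osds": 6, "num_in_osds": 6}'))
```

A deployment loop would poll `ceph osd stat -f json` with this predicate (as the playbook does with its 60 retries) before unsetting the `noup` flag's effect is relied upon by later plays.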
2026-02-16 03:44:23.610272 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-16 03:44:23.610281 | orchestrator | 2026-02-16 03:44:23.610289 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-16 03:44:23.610298 | orchestrator | Monday 16 February 2026 03:44:07 +0000 (0:00:12.721) 0:07:41.614 ******* 2026-02-16 03:44:23.610307 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:44:23.610315 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:44:23.610324 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:44:23.610332 | orchestrator | 2026-02-16 03:44:23.610341 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-16 03:44:23.610375 | orchestrator | Monday 16 February 2026 03:44:08 +0000 (0:00:01.216) 0:07:42.831 ******* 2026-02-16 03:44:23.610384 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:44:23.610394 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:44:23.610403 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:44:23.610412 | orchestrator | 2026-02-16 03:44:23.610420 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-16 03:44:23.610429 | orchestrator | Monday 16 February 2026 03:44:08 +0000 (0:00:00.395) 0:07:43.226 ******* 2026-02-16 03:44:23.610439 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:44:23.610448 | orchestrator | 2026-02-16 03:44:23.610457 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-16 03:44:23.610465 | orchestrator | Monday 16 February 2026 03:44:09 +0000 (0:00:00.875) 0:07:44.101 ******* 2026-02-16 03:44:23.610474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-16 03:44:23.610483 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)
2026-02-16 03:44:23.610492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-16 03:44:23.610500 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:23.610509 | orchestrator |
2026-02-16 03:44:23.610518 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-16 03:44:23.610528 | orchestrator | Monday 16 February 2026 03:44:10 +0000 (0:00:00.392) 0:07:44.494 *******
2026-02-16 03:44:23.610538 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:23.610547 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:44:23.610589 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:44:23.610601 | orchestrator |
2026-02-16 03:44:23.610610 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-16 03:44:23.610620 | orchestrator | Monday 16 February 2026 03:44:10 +0000 (0:00:00.320) 0:07:44.814 *******
2026-02-16 03:44:23.610629 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:23.610639 | orchestrator |
2026-02-16 03:44:23.610649 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-16 03:44:23.610658 | orchestrator | Monday 16 February 2026 03:44:10 +0000 (0:00:00.257) 0:07:45.072 *******
2026-02-16 03:44:23.610668 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:23.610678 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:44:23.610687 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:44:23.610697 | orchestrator |
2026-02-16 03:44:23.610706 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-16 03:44:23.610716 | orchestrator | Monday 16 February 2026 03:44:11 +0000 (0:00:00.587) 0:07:45.660 *******
2026-02-16 03:44:23.610726 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:23.610736 | orchestrator |
2026-02-16 03:44:23.610746 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-16 03:44:23.610756 | orchestrator | Monday 16 February 2026 03:44:11 +0000 (0:00:00.249) 0:07:45.909 *******
2026-02-16 03:44:23.610765 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:23.610775 | orchestrator |
2026-02-16 03:44:23.610785 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-16 03:44:23.610807 | orchestrator | Monday 16 February 2026 03:44:11 +0000 (0:00:00.234) 0:07:46.144 *******
2026-02-16 03:44:23.610817 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:23.610827 | orchestrator |
2026-02-16 03:44:23.610836 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-16 03:44:23.610846 | orchestrator | Monday 16 February 2026 03:44:11 +0000 (0:00:00.141) 0:07:46.285 *******
2026-02-16 03:44:23.610856 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:23.610866 | orchestrator |
2026-02-16 03:44:23.610876 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-16 03:44:23.610885 | orchestrator | Monday 16 February 2026 03:44:12 +0000 (0:00:00.230) 0:07:46.516 *******
2026-02-16 03:44:23.610896 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:23.610905 | orchestrator |
2026-02-16 03:44:23.610914 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-16 03:44:23.610923 | orchestrator | Monday 16 February 2026 03:44:12 +0000 (0:00:00.230) 0:07:46.746 *******
2026-02-16 03:44:23.610932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-16 03:44:23.610941 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-16 03:44:23.610965 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-16 03:44:23.610974 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:23.610982 | orchestrator |
2026-02-16 03:44:23.610991 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-16 03:44:23.611000 | orchestrator | Monday 16 February 2026 03:44:12 +0000 (0:00:00.391) 0:07:47.138 *******
2026-02-16 03:44:23.611008 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:23.611017 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:44:23.611026 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:44:23.611034 | orchestrator |
2026-02-16 03:44:23.611043 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-16 03:44:23.611051 | orchestrator | Monday 16 February 2026 03:44:13 +0000 (0:00:00.329) 0:07:47.468 *******
2026-02-16 03:44:23.611060 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:23.611068 | orchestrator |
2026-02-16 03:44:23.611077 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-16 03:44:23.611086 | orchestrator | Monday 16 February 2026 03:44:13 +0000 (0:00:00.226) 0:07:47.694 *******
2026-02-16 03:44:23.611094 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:23.611110 | orchestrator |
2026-02-16 03:44:23.611119 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-16 03:44:23.611127 | orchestrator |
2026-02-16 03:44:23.611136 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-16 03:44:23.611145 | orchestrator | Monday 16 February 2026 03:44:14 +0000 (0:00:01.289) 0:07:48.983 *******
2026-02-16 03:44:23.611154 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:44:23.611164 | orchestrator |
2026-02-16 03:44:23.611173 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-16 03:44:23.611181 | orchestrator | Monday 16 February 2026 03:44:15 +0000 (0:00:01.246) 0:07:50.230 *******
2026-02-16 03:44:23.611190 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:44:23.611198 | orchestrator |
2026-02-16 03:44:23.611207 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-16 03:44:23.611233 | orchestrator | Monday 16 February 2026 03:44:17 +0000 (0:00:01.309) 0:07:51.540 *******
2026-02-16 03:44:23.611241 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:23.611250 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:44:23.611259 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:44:23.611268 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:44:23.611276 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:44:23.611285 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:44:23.611294 | orchestrator |
2026-02-16 03:44:23.611302 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-16 03:44:23.611311 | orchestrator | Monday 16 February 2026 03:44:18 +0000 (0:00:01.287) 0:07:52.827 *******
2026-02-16 03:44:23.611320 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:44:23.611329 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:44:23.611337 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:44:23.611346 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:44:23.611354 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:44:23.611363 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:44:23.611372 | orchestrator |
2026-02-16 03:44:23.611380 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-16 03:44:23.611389 | orchestrator | Monday 16 February 2026 03:44:19 +0000 (0:00:00.730) 0:07:53.557 *******
2026-02-16 03:44:23.611398 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:44:23.611406 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:44:23.611415 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:44:23.611423 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:44:23.611432 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:44:23.611441 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:44:23.611449 | orchestrator |
2026-02-16 03:44:23.611458 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-16 03:44:23.611467 | orchestrator | Monday 16 February 2026 03:44:20 +0000 (0:00:00.907) 0:07:54.465 *******
2026-02-16 03:44:23.611475 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:44:23.611484 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:44:23.611492 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:44:23.611501 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:44:23.611510 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:44:23.611518 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:44:23.611527 | orchestrator |
2026-02-16 03:44:23.611536 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-16 03:44:23.611544 | orchestrator | Monday 16 February 2026 03:44:20 +0000 (0:00:00.724) 0:07:55.189 *******
2026-02-16 03:44:23.611553 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:23.611562 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:44:23.611570 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:44:23.611583 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:44:23.611643 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:44:23.611653 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:44:23.611662 | orchestrator |
2026-02-16 03:44:23.611670 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-16 03:44:23.611679 | orchestrator | Monday 16 February 2026 03:44:22 +0000 (0:00:01.275) 0:07:56.465 *******
2026-02-16 03:44:23.611688 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:23.611696 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:44:23.611705 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:44:23.611714 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:44:23.611722 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:44:23.611730 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:44:23.611739 | orchestrator |
2026-02-16 03:44:23.611747 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-16 03:44:23.611756 | orchestrator | Monday 16 February 2026 03:44:22 +0000 (0:00:00.638) 0:07:57.104 *******
2026-02-16 03:44:23.611765 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:23.611773 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:44:23.611782 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:44:23.611790 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:44:23.611804 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:44:55.131657 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:44:55.131773 | orchestrator |
2026-02-16 03:44:55.131791 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-16 03:44:55.131804 | orchestrator | Monday 16 February 2026 03:44:23 +0000 (0:00:00.810) 0:07:57.915 *******
2026-02-16 03:44:55.131816 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:44:55.131828 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:44:55.131897 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:44:55.131930 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:44:55.131964 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:44:55.131976 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:44:55.131987 | orchestrator |
2026-02-16 03:44:55.131998 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-16 03:44:55.132009 | orchestrator | Monday 16 February 2026 03:44:24 +0000 (0:00:01.022) 0:07:58.938 *******
2026-02-16 03:44:55.132021 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:44:55.132032 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:44:55.132043 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:44:55.132053 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:44:55.132064 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:44:55.132075 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:44:55.132086 | orchestrator |
2026-02-16 03:44:55.132097 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-16 03:44:55.132108 | orchestrator | Monday 16 February 2026 03:44:25 +0000 (0:00:01.374) 0:08:00.312 *******
2026-02-16 03:44:55.132119 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:55.132130 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:44:55.132142 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:44:55.132152 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:44:55.132164 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:44:55.132176 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:44:55.132187 | orchestrator |
2026-02-16 03:44:55.132200 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-16 03:44:55.132213 | orchestrator | Monday 16 February 2026 03:44:26 +0000 (0:00:00.599) 0:08:00.911 *******
2026-02-16 03:44:55.132226 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:55.132238 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:44:55.132251 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:44:55.132336 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:44:55.132356 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:44:55.132374 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:44:55.132386 | orchestrator |
2026-02-16 03:44:55.132399 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-16 03:44:55.132437 | orchestrator | Monday 16 February 2026 03:44:27 +0000 (0:00:00.887) 0:08:01.798 *******
2026-02-16 03:44:55.132451 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:44:55.132463 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:44:55.132476 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:44:55.132488 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:44:55.132500 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:44:55.132513 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:44:55.132525 | orchestrator |
2026-02-16 03:44:55.132537 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-16 03:44:55.132549 | orchestrator | Monday 16 February 2026 03:44:28 +0000 (0:00:00.621) 0:08:02.420 *******
2026-02-16 03:44:55.132560 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:44:55.132571 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:44:55.132581 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:44:55.132592 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:44:55.132602 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:44:55.132613 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:44:55.132623 | orchestrator |
2026-02-16 03:44:55.132634 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-16 03:44:55.132645 | orchestrator | Monday 16 February 2026 03:44:28 +0000 (0:00:00.838) 0:08:03.258 *******
2026-02-16 03:44:55.132656 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:44:55.132667 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:44:55.132677 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:44:55.132688 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:44:55.132750 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:44:55.132762 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:44:55.132773 | orchestrator |
2026-02-16 03:44:55.132784 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-16 03:44:55.132795 | orchestrator | Monday 16 February 2026 03:44:29 +0000 (0:00:00.623) 0:08:03.882 *******
2026-02-16 03:44:55.132805 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:55.132816 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:44:55.132827 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:44:55.132837 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:44:55.132848 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:44:55.132858 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:44:55.132869 | orchestrator |
2026-02-16 03:44:55.132880 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-16 03:44:55.132891 | orchestrator | Monday 16 February 2026 03:44:30 +0000 (0:00:00.841) 0:08:04.723 *******
2026-02-16 03:44:55.132901 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:55.132912 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:44:55.132939 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:44:55.132951 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:44:55.132961 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:44:55.132972 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:44:55.132983 | orchestrator |
2026-02-16 03:44:55.132994 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-16 03:44:55.133005 | orchestrator | Monday 16 February 2026 03:44:31 +0000 (0:00:00.623) 0:08:05.347 *******
2026-02-16 03:44:55.133015 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:44:55.133026 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:44:55.133037 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:44:55.133047 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:44:55.133058 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:44:55.133069 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:44:55.133080 | orchestrator |
2026-02-16 03:44:55.133090 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-16 03:44:55.133101 | orchestrator | Monday 16 February 2026 03:44:31 +0000 (0:00:00.876) 0:08:06.223 *******
2026-02-16 03:44:55.133112 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:44:55.133123 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:44:55.133142 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:44:55.133153 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:44:55.133183 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:44:55.133195 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:44:55.133205 | orchestrator |
2026-02-16 03:44:55.133216 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-16 03:44:55.133227 | orchestrator | Monday 16 February 2026 03:44:32 +0000 (0:00:00.648) 0:08:06.872 *******
2026-02-16 03:44:55.133238 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:44:55.133248 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:44:55.133289 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:44:55.133300 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:44:55.133311 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:44:55.133322 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:44:55.133332 | orchestrator |
2026-02-16 03:44:55.133344 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-02-16 03:44:55.133355 | orchestrator | Monday 16 February 2026 03:44:33 +0000 (0:00:01.281) 0:08:08.153 *******
2026-02-16 03:44:55.133366 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-16 03:44:55.133376 | orchestrator |
2026-02-16 03:44:55.133387 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-02-16 03:44:55.133398 | orchestrator | Monday 16 February 2026 03:44:37 +0000 (0:00:04.101) 0:08:12.255 *******
2026-02-16 03:44:55.133408 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-16 03:44:55.133419 | orchestrator |
2026-02-16 03:44:55.133430 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-02-16 03:44:55.133441 | orchestrator | Monday 16 February 2026 03:44:40 +0000 (0:00:02.513) 0:08:14.769 *******
2026-02-16 03:44:55.133451 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:44:55.133462 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:44:55.133473 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:44:55.133483 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:44:55.133494 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:44:55.133504 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:44:55.133515 | orchestrator |
2026-02-16 03:44:55.133526 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-02-16 03:44:55.133537 | orchestrator | Monday 16 February 2026 03:44:41 +0000 (0:00:01.503) 0:08:16.272 *******
2026-02-16 03:44:55.133547 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:44:55.133558 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:44:55.133569 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:44:55.133579 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:44:55.133590 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:44:55.133600 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:44:55.133611 | orchestrator |
2026-02-16 03:44:55.133622 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-02-16 03:44:55.133632 | orchestrator | Monday 16 February 2026 03:44:43 +0000 (0:00:01.206) 0:08:17.478 *******
2026-02-16 03:44:55.133645 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:44:55.133657 | orchestrator |
2026-02-16 03:44:55.133668 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-02-16 03:44:55.133679 | orchestrator | Monday 16 February 2026 03:44:44 +0000 (0:00:01.280) 0:08:18.759 *******
2026-02-16 03:44:55.133690 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:44:55.133700 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:44:55.133711 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:44:55.133721 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:44:55.133732 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:44:55.133743 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:44:55.133753 | orchestrator |
2026-02-16 03:44:55.133764 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-02-16 03:44:55.133775 | orchestrator | Monday 16 February 2026 03:44:45 +0000 (0:00:01.553) 0:08:20.312 *******
2026-02-16 03:44:55.133797 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:44:55.133808 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:44:55.133819 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:44:55.133829 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:44:55.133840 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:44:55.133850 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:44:55.133861 | orchestrator |
2026-02-16 03:44:55.133872 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-02-16 03:44:55.133882 | orchestrator | Monday 16 February 2026 03:44:49 +0000 (0:00:03.690) 0:08:24.003 *******
2026-02-16 03:44:55.133894 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 03:44:55.133904 | orchestrator |
2026-02-16 03:44:55.133915 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-02-16 03:44:55.133926 | orchestrator | Monday 16 February 2026 03:44:51 +0000 (0:00:01.408) 0:08:25.411 *******
2026-02-16 03:44:55.133937 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:44:55.133953 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:44:55.133964 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:44:55.133975 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:44:55.133986 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:44:55.133996 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:44:55.134007 | orchestrator |
2026-02-16 03:44:55.134066 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-02-16 03:44:55.134081 | orchestrator | Monday 16 February 2026 03:44:51 +0000 (0:00:00.636) 0:08:26.048 *******
2026-02-16 03:44:55.134092 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:44:55.134103 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:44:55.134114 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:44:55.134124 | orchestrator | changed: [testbed-node-0]
2026-02-16 03:44:55.134135 | orchestrator | changed: [testbed-node-1]
2026-02-16 03:44:55.134145 | orchestrator | changed: [testbed-node-2]
2026-02-16 03:44:55.134156 | orchestrator |
2026-02-16 03:44:55.134167 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-02-16 03:44:55.134177 | orchestrator | Monday 16 February 2026 03:44:54 +0000 (0:00:02.450) 0:08:28.498 *******
2026-02-16 03:44:55.134188 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:44:55.134199 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:44:55.134209 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:44:55.134220 | orchestrator | ok: [testbed-node-0]
2026-02-16 03:44:55.134239 | orchestrator | ok: [testbed-node-1]
2026-02-16 03:45:23.489626 | orchestrator | ok: [testbed-node-2]
2026-02-16 03:45:23.489750 | orchestrator |
2026-02-16 03:45:23.489768 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-02-16 03:45:23.489780 | orchestrator |
2026-02-16 03:45:23.489791 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-16 03:45:23.489802 | orchestrator | Monday 16 February 2026 03:44:55 +0000 (0:00:00.938) 0:08:29.437 *******
2026-02-16 03:45:23.489840 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 03:45:23.489871 | orchestrator |
2026-02-16 03:45:23.489889 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-16 03:45:23.489907 | orchestrator | Monday 16 February 2026 03:44:55 +0000 (0:00:00.818) 0:08:30.256 *******
2026-02-16 03:45:23.489924 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 03:45:23.489940 | orchestrator |
2026-02-16 03:45:23.489955 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-16 03:45:23.489971 | orchestrator | Monday 16 February 2026 03:44:56 +0000 (0:00:00.793) 0:08:31.050 *******
2026-02-16 03:45:23.490001 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:45:23.490107 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:45:23.490127 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:45:23.490142 | orchestrator |
2026-02-16 03:45:23.490159 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-16 03:45:23.490174 | orchestrator | Monday 16 February 2026 03:44:57 +0000 (0:00:00.326) 0:08:31.376 *******
2026-02-16 03:45:23.490192 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:45:23.490210 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:45:23.490227 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:45:23.490245 | orchestrator |
2026-02-16 03:45:23.490261 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-16 03:45:23.490277 | orchestrator | Monday 16 February 2026 03:44:57 +0000 (0:00:00.735) 0:08:32.112 *******
2026-02-16 03:45:23.490375 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:45:23.490394 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:45:23.490410 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:45:23.490425 | orchestrator |
2026-02-16 03:45:23.490441 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-16 03:45:23.490457 | orchestrator | Monday 16 February 2026 03:44:58 +0000 (0:00:00.735) 0:08:32.847 *******
2026-02-16 03:45:23.490473 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:45:23.490488 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:45:23.490503 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:45:23.490517 | orchestrator |
2026-02-16 03:45:23.490533 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-16 03:45:23.490550 | orchestrator | Monday 16 February 2026 03:44:59 +0000 (0:00:01.038) 0:08:33.885 *******
2026-02-16 03:45:23.490566 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:45:23.490583 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:45:23.490599 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:45:23.490614 | orchestrator |
2026-02-16 03:45:23.490631 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-16 03:45:23.490648 | orchestrator | Monday 16 February 2026 03:44:59 +0000 (0:00:00.344) 0:08:34.229 *******
2026-02-16 03:45:23.490666 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:45:23.490678 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:45:23.490688 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:45:23.490698 | orchestrator |
2026-02-16 03:45:23.490708 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-16 03:45:23.490718 | orchestrator | Monday 16 February 2026 03:45:00 +0000 (0:00:00.328) 0:08:34.558 *******
2026-02-16 03:45:23.490728 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:45:23.490737 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:45:23.490747 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:45:23.490757 | orchestrator |
2026-02-16 03:45:23.490771 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-16 03:45:23.490783 | orchestrator | Monday 16 February 2026 03:45:00 +0000 (0:00:00.327) 0:08:34.886 *******
2026-02-16 03:45:23.490797 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:45:23.490808 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:45:23.490821 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:45:23.490834 | orchestrator |
2026-02-16 03:45:23.490846 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-16 03:45:23.490857 | orchestrator | Monday 16 February 2026 03:45:01 +0000 (0:00:01.030) 0:08:35.916 *******
2026-02-16 03:45:23.490869 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:45:23.490883 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:45:23.490895 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:45:23.490908 | orchestrator |
2026-02-16 03:45:23.490922 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-16 03:45:23.490954 | orchestrator | Monday 16 February 2026 03:45:02 +0000 (0:00:00.737) 0:08:36.654 *******
2026-02-16 03:45:23.490968 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:45:23.490982 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:45:23.490995 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:45:23.491023 | orchestrator |
2026-02-16 03:45:23.491038 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-16 03:45:23.491053 | orchestrator | Monday 16 February 2026 03:45:02 +0000 (0:00:00.320) 0:08:36.974 *******
2026-02-16 03:45:23.491066 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:45:23.491078 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:45:23.491086 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:45:23.491094 | orchestrator |
2026-02-16 03:45:23.491102 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-16 03:45:23.491110 | orchestrator | Monday 16 February 2026 03:45:02 +0000 (0:00:00.298) 0:08:37.272 *******
2026-02-16 03:45:23.491118 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:45:23.491126 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:45:23.491133 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:45:23.491141 | orchestrator |
2026-02-16 03:45:23.491149 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-16 03:45:23.491157 | orchestrator | Monday 16 February 2026 03:45:03 +0000 (0:00:00.588) 0:08:37.861 *******
2026-02-16 03:45:23.491186 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:45:23.491194 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:45:23.491202 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:45:23.491210 | orchestrator |
2026-02-16 03:45:23.491218 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-16 03:45:23.491226 | orchestrator | Monday 16 February 2026 03:45:03 +0000 (0:00:00.353) 0:08:38.214 *******
2026-02-16 03:45:23.491234 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:45:23.491242 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:45:23.491250 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:45:23.491258 | orchestrator |
2026-02-16 03:45:23.491266 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-16 03:45:23.491274 | orchestrator | Monday 16 February 2026 03:45:04 +0000 (0:00:00.343) 0:08:38.558 *******
2026-02-16 03:45:23.491281 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:45:23.491314 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:45:23.491323 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:45:23.491331 | orchestrator |
2026-02-16 03:45:23.491339 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-16 03:45:23.491347 | orchestrator | Monday 16 February 2026 03:45:04 +0000 (0:00:00.350) 0:08:38.908 *******
2026-02-16 03:45:23.491355 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:45:23.491362 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:45:23.491370 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:45:23.491378 | orchestrator |
2026-02-16 03:45:23.491386 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-16 03:45:23.491394 | orchestrator | Monday 16 February 2026 03:45:05 +0000 (0:00:00.612) 0:08:39.521 *******
2026-02-16 03:45:23.491401 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:45:23.491409 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:45:23.491417 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:45:23.491424 | orchestrator |
2026-02-16 03:45:23.491432 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-16 03:45:23.491440 | orchestrator | Monday 16 February 2026 03:45:05 +0000 (0:00:00.318) 0:08:39.840 *******
2026-02-16 03:45:23.491448 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:45:23.491456 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:45:23.491463 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:45:23.491471 | orchestrator |
2026-02-16 03:45:23.491479 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-16 03:45:23.491487 | orchestrator | Monday 16 February 2026 03:45:05 +0000 (0:00:00.355) 0:08:40.195 *******
2026-02-16 03:45:23.491495 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:45:23.491503 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:45:23.491511 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:45:23.491518 | orchestrator |
2026-02-16 03:45:23.491526 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-16 03:45:23.491542 | orchestrator | Monday 16 February 2026 03:45:06 +0000 (0:00:00.795) 0:08:40.990 *******
2026-02-16 03:45:23.491550 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:45:23.491558 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:45:23.491566 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-02-16 03:45:23.491575 | orchestrator |
2026-02-16 03:45:23.491583 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-02-16 03:45:23.491591 | orchestrator | Monday 16 February 2026 03:45:07 +0000 (0:00:00.415) 0:08:41.405 *******
2026-02-16 03:45:23.491599 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-16 03:45:23.491607 | orchestrator |
2026-02-16 03:45:23.491617 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-02-16 03:45:23.491631 | orchestrator | Monday 16 February 2026 03:45:09 +0000 (0:00:02.055) 0:08:43.461 *******
2026-02-16 03:45:23.491646 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-02-16 03:45:23.491662 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:45:23.491676 | orchestrator |
2026-02-16 03:45:23.491689 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-02-16 03:45:23.491697 | orchestrator | Monday 16 February 2026 03:45:09 +0000 (0:00:00.219) 0:08:43.680 *******
2026-02-16 03:45:23.491707 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-16 03:45:23.491728 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-16 03:45:23.491737 | orchestrator |
2026-02-16 03:45:23.491745 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-02-16 03:45:23.491753 | orchestrator | Monday 16 February 2026 03:45:17 +0000 (0:00:08.549) 0:08:52.229 *******
2026-02-16 03:45:23.491761 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-16 03:45:23.491769 | orchestrator |
2026-02-16 03:45:23.491777 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-16 03:45:23.491785 | orchestrator | Monday 16 February 2026 03:45:21 +0000 (0:00:03.731) 0:08:55.961 *******
2026-02-16 03:45:23.491793 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 03:45:23.491802 | orchestrator |
2026-02-16 03:45:23.491810 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-16 03:45:23.491818 | orchestrator | Monday 16 February 2026 03:45:22 +0000 (0:00:00.798) 0:08:56.760 *******
2026-02-16 03:45:23.491840 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-16 03:45:49.237808 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-16 03:45:49.237950 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-16 03:45:49.237977 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-02-16 03:45:49.237997 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-02-16 03:45:49.238009 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-02-16 03:45:49.238089 | orchestrator |
2026-02-16 03:45:49.238103 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-16 03:45:49.238114 | orchestrator | Monday 16 February 2026 03:45:23 +0000 (0:00:01.035) 0:08:57.795 *******
2026-02-16 03:45:49.238125 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:45:49.238160 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-16 03:45:49.238172 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-16 03:45:49.238183 | orchestrator |
2026-02-16 03:45:49.238194 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-02-16 03:45:49.238205 | orchestrator | Monday 16 February 2026 03:45:25 +0000 (0:00:02.178) 0:08:59.974 *******
2026-02-16 03:45:49.238216 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-16 03:45:49.238227 | orchestrator | skipping: [testbed-node-3]
=> (item=None)  2026-02-16 03:45:49.238238 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:45:49.238249 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-16 03:45:49.238260 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-16 03:45:49.238272 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:45:49.238283 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-16 03:45:49.238293 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-16 03:45:49.238304 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:45:49.238315 | orchestrator | 2026-02-16 03:45:49.238357 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-16 03:45:49.238372 | orchestrator | Monday 16 February 2026 03:45:27 +0000 (0:00:01.416) 0:09:01.390 ******* 2026-02-16 03:45:49.238384 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:45:49.238397 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:45:49.238408 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:45:49.238420 | orchestrator | 2026-02-16 03:45:49.238433 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-16 03:45:49.238445 | orchestrator | Monday 16 February 2026 03:45:29 +0000 (0:00:02.733) 0:09:04.124 ******* 2026-02-16 03:45:49.238458 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:45:49.238470 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:45:49.238482 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:45:49.238494 | orchestrator | 2026-02-16 03:45:49.238507 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-16 03:45:49.238519 | orchestrator | Monday 16 February 2026 03:45:30 +0000 (0:00:00.329) 0:09:04.453 ******* 2026-02-16 03:45:49.238532 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-16 03:45:49.238546 | orchestrator | 2026-02-16 03:45:49.238558 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-16 03:45:49.238571 | orchestrator | Monday 16 February 2026 03:45:30 +0000 (0:00:00.813) 0:09:05.267 ******* 2026-02-16 03:45:49.238583 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:45:49.238597 | orchestrator | 2026-02-16 03:45:49.238608 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-16 03:45:49.238621 | orchestrator | Monday 16 February 2026 03:45:31 +0000 (0:00:00.543) 0:09:05.811 ******* 2026-02-16 03:45:49.238633 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:45:49.238645 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:45:49.238658 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:45:49.238670 | orchestrator | 2026-02-16 03:45:49.238683 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-16 03:45:49.238695 | orchestrator | Monday 16 February 2026 03:45:32 +0000 (0:00:01.211) 0:09:07.022 ******* 2026-02-16 03:45:49.238706 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:45:49.238717 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:45:49.238727 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:45:49.238738 | orchestrator | 2026-02-16 03:45:49.238749 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-16 03:45:49.238774 | orchestrator | Monday 16 February 2026 03:45:34 +0000 (0:00:01.380) 0:09:08.403 ******* 2026-02-16 03:45:49.238785 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:45:49.238804 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:45:49.238815 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:45:49.238825 | orchestrator | 2026-02-16 
03:45:49.238836 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-16 03:45:49.238847 | orchestrator | Monday 16 February 2026 03:45:35 +0000 (0:00:01.860) 0:09:10.263 ******* 2026-02-16 03:45:49.238858 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:45:49.238869 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:45:49.238879 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:45:49.238890 | orchestrator | 2026-02-16 03:45:49.238901 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-16 03:45:49.238912 | orchestrator | Monday 16 February 2026 03:45:37 +0000 (0:00:01.953) 0:09:12.216 ******* 2026-02-16 03:45:49.238923 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:45:49.238934 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:45:49.238944 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:45:49.238955 | orchestrator | 2026-02-16 03:45:49.238966 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-16 03:45:49.238977 | orchestrator | Monday 16 February 2026 03:45:39 +0000 (0:00:01.498) 0:09:13.714 ******* 2026-02-16 03:45:49.238988 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:45:49.238999 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:45:49.239031 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:45:49.239042 | orchestrator | 2026-02-16 03:45:49.239054 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-16 03:45:49.239065 | orchestrator | Monday 16 February 2026 03:45:40 +0000 (0:00:00.670) 0:09:14.385 ******* 2026-02-16 03:45:49.239076 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:45:49.239087 | orchestrator | 2026-02-16 03:45:49.239098 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-02-16 03:45:49.239187 | orchestrator | Monday 16 February 2026 03:45:40 +0000 (0:00:00.751) 0:09:15.136 ******* 2026-02-16 03:45:49.239200 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:45:49.239211 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:45:49.239222 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:45:49.239239 | orchestrator | 2026-02-16 03:45:49.239259 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-16 03:45:49.239277 | orchestrator | Monday 16 February 2026 03:45:41 +0000 (0:00:00.335) 0:09:15.472 ******* 2026-02-16 03:45:49.239296 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:45:49.239314 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:45:49.239360 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:45:49.239379 | orchestrator | 2026-02-16 03:45:49.239397 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-16 03:45:49.239408 | orchestrator | Monday 16 February 2026 03:45:42 +0000 (0:00:01.171) 0:09:16.643 ******* 2026-02-16 03:45:49.239419 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-16 03:45:49.239430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-16 03:45:49.239441 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-16 03:45:49.239452 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:45:49.239463 | orchestrator | 2026-02-16 03:45:49.239474 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-16 03:45:49.239485 | orchestrator | Monday 16 February 2026 03:45:43 +0000 (0:00:00.861) 0:09:17.505 ******* 2026-02-16 03:45:49.239496 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:45:49.239506 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:45:49.239517 | orchestrator | ok: [testbed-node-5] 2026-02-16 
03:45:49.239528 | orchestrator | 2026-02-16 03:45:49.239539 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-16 03:45:49.239550 | orchestrator | 2026-02-16 03:45:49.239561 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-16 03:45:49.239572 | orchestrator | Monday 16 February 2026 03:45:44 +0000 (0:00:00.839) 0:09:18.345 ******* 2026-02-16 03:45:49.239593 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:45:49.239606 | orchestrator | 2026-02-16 03:45:49.239617 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-16 03:45:49.239628 | orchestrator | Monday 16 February 2026 03:45:44 +0000 (0:00:00.518) 0:09:18.864 ******* 2026-02-16 03:45:49.239639 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:45:49.239650 | orchestrator | 2026-02-16 03:45:49.239661 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-16 03:45:49.239672 | orchestrator | Monday 16 February 2026 03:45:45 +0000 (0:00:00.742) 0:09:19.607 ******* 2026-02-16 03:45:49.239683 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:45:49.239694 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:45:49.239704 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:45:49.239715 | orchestrator | 2026-02-16 03:45:49.239726 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-16 03:45:49.239737 | orchestrator | Monday 16 February 2026 03:45:45 +0000 (0:00:00.311) 0:09:19.918 ******* 2026-02-16 03:45:49.239748 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:45:49.239758 | orchestrator | ok: [testbed-node-4] 2026-02-16 
03:45:49.239769 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:45:49.239780 | orchestrator | 2026-02-16 03:45:49.239798 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-16 03:45:49.239817 | orchestrator | Monday 16 February 2026 03:45:46 +0000 (0:00:00.697) 0:09:20.616 ******* 2026-02-16 03:45:49.239835 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:45:49.239853 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:45:49.239870 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:45:49.239886 | orchestrator | 2026-02-16 03:45:49.239903 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-16 03:45:49.239921 | orchestrator | Monday 16 February 2026 03:45:47 +0000 (0:00:00.979) 0:09:21.595 ******* 2026-02-16 03:45:49.239940 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:45:49.239958 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:45:49.239985 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:45:49.240005 | orchestrator | 2026-02-16 03:45:49.240018 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-16 03:45:49.240029 | orchestrator | Monday 16 February 2026 03:45:47 +0000 (0:00:00.723) 0:09:22.318 ******* 2026-02-16 03:45:49.240039 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:45:49.240050 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:45:49.240061 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:45:49.240072 | orchestrator | 2026-02-16 03:45:49.240083 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-16 03:45:49.240094 | orchestrator | Monday 16 February 2026 03:45:48 +0000 (0:00:00.338) 0:09:22.657 ******* 2026-02-16 03:45:49.240104 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:45:49.240115 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:45:49.240126 | orchestrator | skipping: 
[testbed-node-5] 2026-02-16 03:45:49.240137 | orchestrator | 2026-02-16 03:45:49.240148 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-16 03:45:49.240159 | orchestrator | Monday 16 February 2026 03:45:48 +0000 (0:00:00.305) 0:09:22.963 ******* 2026-02-16 03:45:49.240169 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:45:49.240180 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:45:49.240191 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:45:49.240202 | orchestrator | 2026-02-16 03:45:49.240224 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-16 03:46:10.562250 | orchestrator | Monday 16 February 2026 03:45:49 +0000 (0:00:00.575) 0:09:23.539 ******* 2026-02-16 03:46:10.562372 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:46:10.562386 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:46:10.562424 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:46:10.562434 | orchestrator | 2026-02-16 03:46:10.562446 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-16 03:46:10.562457 | orchestrator | Monday 16 February 2026 03:45:49 +0000 (0:00:00.762) 0:09:24.301 ******* 2026-02-16 03:46:10.562467 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:46:10.562477 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:46:10.562488 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:46:10.562521 | orchestrator | 2026-02-16 03:46:10.562533 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-16 03:46:10.562544 | orchestrator | Monday 16 February 2026 03:45:50 +0000 (0:00:00.737) 0:09:25.038 ******* 2026-02-16 03:46:10.562555 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:46:10.562568 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:46:10.562579 | orchestrator | skipping: [testbed-node-5] 2026-02-16 
03:46:10.562590 | orchestrator | 2026-02-16 03:46:10.562601 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-16 03:46:10.562612 | orchestrator | Monday 16 February 2026 03:45:51 +0000 (0:00:00.308) 0:09:25.347 ******* 2026-02-16 03:46:10.562623 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:46:10.562634 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:46:10.562645 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:46:10.562656 | orchestrator | 2026-02-16 03:46:10.562667 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-16 03:46:10.562679 | orchestrator | Monday 16 February 2026 03:45:51 +0000 (0:00:00.527) 0:09:25.874 ******* 2026-02-16 03:46:10.562691 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:46:10.562702 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:46:10.562713 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:46:10.562724 | orchestrator | 2026-02-16 03:46:10.562735 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-16 03:46:10.562746 | orchestrator | Monday 16 February 2026 03:45:51 +0000 (0:00:00.345) 0:09:26.219 ******* 2026-02-16 03:46:10.562757 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:46:10.562768 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:46:10.562779 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:46:10.562790 | orchestrator | 2026-02-16 03:46:10.562801 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-16 03:46:10.562812 | orchestrator | Monday 16 February 2026 03:45:52 +0000 (0:00:00.335) 0:09:26.554 ******* 2026-02-16 03:46:10.562824 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:46:10.562835 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:46:10.562845 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:46:10.562856 | orchestrator | 2026-02-16 
03:46:10.562867 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-16 03:46:10.562878 | orchestrator | Monday 16 February 2026 03:45:52 +0000 (0:00:00.314) 0:09:26.869 ******* 2026-02-16 03:46:10.562890 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:46:10.562901 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:46:10.562912 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:46:10.562923 | orchestrator | 2026-02-16 03:46:10.562934 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-16 03:46:10.562946 | orchestrator | Monday 16 February 2026 03:45:53 +0000 (0:00:00.547) 0:09:27.417 ******* 2026-02-16 03:46:10.562956 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:46:10.562967 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:46:10.562978 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:46:10.562989 | orchestrator | 2026-02-16 03:46:10.563000 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-16 03:46:10.563033 | orchestrator | Monday 16 February 2026 03:45:53 +0000 (0:00:00.308) 0:09:27.725 ******* 2026-02-16 03:46:10.563046 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:46:10.563055 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:46:10.563066 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:46:10.563076 | orchestrator | 2026-02-16 03:46:10.563095 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-16 03:46:10.563106 | orchestrator | Monday 16 February 2026 03:45:53 +0000 (0:00:00.323) 0:09:28.049 ******* 2026-02-16 03:46:10.563117 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:46:10.563127 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:46:10.563138 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:46:10.563148 | orchestrator | 2026-02-16 03:46:10.563159 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-16 03:46:10.563170 | orchestrator | Monday 16 February 2026 03:45:54 +0000 (0:00:00.353) 0:09:28.402 ******* 2026-02-16 03:46:10.563180 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:46:10.563191 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:46:10.563201 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:46:10.563212 | orchestrator | 2026-02-16 03:46:10.563236 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-16 03:46:10.563247 | orchestrator | Monday 16 February 2026 03:45:54 +0000 (0:00:00.907) 0:09:29.310 ******* 2026-02-16 03:46:10.563258 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:46:10.563271 | orchestrator | 2026-02-16 03:46:10.563281 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-16 03:46:10.563292 | orchestrator | Monday 16 February 2026 03:45:55 +0000 (0:00:00.568) 0:09:29.879 ******* 2026-02-16 03:46:10.563303 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-16 03:46:10.563314 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-16 03:46:10.563325 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-16 03:46:10.563335 | orchestrator | 2026-02-16 03:46:10.563346 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-16 03:46:10.563357 | orchestrator | Monday 16 February 2026 03:45:58 +0000 (0:00:02.475) 0:09:32.355 ******* 2026-02-16 03:46:10.563368 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-16 03:46:10.563379 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-16 03:46:10.563390 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:46:10.563420 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-02-16 03:46:10.563431 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-16 03:46:10.563442 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:46:10.563453 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-16 03:46:10.563464 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-16 03:46:10.563475 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:46:10.563485 | orchestrator | 2026-02-16 03:46:10.563494 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-16 03:46:10.563504 | orchestrator | Monday 16 February 2026 03:45:59 +0000 (0:00:01.517) 0:09:33.873 ******* 2026-02-16 03:46:10.563514 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:46:10.563523 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:46:10.563533 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:46:10.563543 | orchestrator | 2026-02-16 03:46:10.563553 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-16 03:46:10.563564 | orchestrator | Monday 16 February 2026 03:45:59 +0000 (0:00:00.344) 0:09:34.217 ******* 2026-02-16 03:46:10.563574 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:46:10.563585 | orchestrator | 2026-02-16 03:46:10.563596 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-16 03:46:10.563607 | orchestrator | Monday 16 February 2026 03:46:00 +0000 (0:00:00.789) 0:09:35.006 ******* 2026-02-16 03:46:10.563619 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-16 03:46:10.563631 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-16 03:46:10.563650 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-16 03:46:10.563661 | orchestrator | 2026-02-16 03:46:10.563670 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-16 03:46:10.563680 | orchestrator | Monday 16 February 2026 03:46:01 +0000 (0:00:00.839) 0:09:35.846 ******* 2026-02-16 03:46:10.563690 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-16 03:46:10.563700 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-16 03:46:10.563710 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-16 03:46:10.563719 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-16 03:46:10.563730 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-16 03:46:10.563740 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-16 03:46:10.563750 | orchestrator | 2026-02-16 03:46:10.563760 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-16 03:46:10.563770 | orchestrator | Monday 16 February 2026 03:46:05 +0000 (0:00:04.440) 0:09:40.286 ******* 2026-02-16 03:46:10.563781 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-16 03:46:10.563791 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-16 03:46:10.563802 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-16 03:46:10.563812 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-16 03:46:10.563822 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-16 03:46:10.563833 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-16 03:46:10.563843 | orchestrator | 2026-02-16 03:46:10.563851 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-16 03:46:10.563857 | orchestrator | Monday 16 February 2026 03:46:08 +0000 (0:00:02.288) 0:09:42.574 ******* 2026-02-16 03:46:10.563870 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-16 03:46:10.563876 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:46:10.563882 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-16 03:46:10.563889 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:46:10.563895 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-16 03:46:10.563901 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:46:10.563907 | orchestrator | 2026-02-16 03:46:10.563913 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-16 03:46:10.563920 | orchestrator | Monday 16 February 2026 03:46:09 +0000 (0:00:01.443) 0:09:44.018 ******* 2026-02-16 03:46:10.563926 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-16 03:46:10.563932 | orchestrator | 2026-02-16 03:46:10.563938 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-16 03:46:10.563944 | orchestrator | Monday 16 February 2026 03:46:09 +0000 (0:00:00.231) 0:09:44.249 ******* 2026-02-16 03:46:10.563950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}}) 
2026-02-16 03:46:10.563958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2026-02-16 03:46:10.563974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2026-02-16 03:46:54.639229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2026-02-16 03:46:54.639330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2026-02-16 03:46:54.639345 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:46:54.639357 | orchestrator | 
2026-02-16 03:46:54.639369 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-02-16 03:46:54.639403 | orchestrator | Monday 16 February 2026 03:46:10 +0000 (0:00:00.618) 0:09:44.867 *******
2026-02-16 03:46:54.639414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2026-02-16 03:46:54.639424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2026-02-16 03:46:54.639435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2026-02-16 03:46:54.639445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2026-02-16 03:46:54.639456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2026-02-16 03:46:54.639466 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:46:54.639476 | orchestrator | 
2026-02-16 03:46:54.639487 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-16 03:46:54.639497 | orchestrator | Monday 16 February 2026 03:46:11 +0000 (0:00:00.586) 0:09:45.454 *******
2026-02-16 03:46:54.639508 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 03:46:54.639519 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 03:46:54.639530 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 03:46:54.639540 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 03:46:54.639550 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 03:46:54.639560 | orchestrator | 
2026-02-16 03:46:54.639570 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-02-16 03:46:54.639580 | orchestrator | Monday 16 February 2026 03:46:42 +0000 (0:00:31.353) 0:10:16.807 *******
2026-02-16 03:46:54.639591 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:46:54.639601 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:46:54.639611 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:46:54.639621 | orchestrator | 
2026-02-16 03:46:54.639631 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-02-16 03:46:54.639641 | orchestrator | Monday 16 February 2026 03:46:42 +0000 (0:00:00.319) 0:10:17.126 *******
2026-02-16 03:46:54.639651 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:46:54.639661 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:46:54.639671 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:46:54.639681 | orchestrator | 
2026-02-16 03:46:54.639691 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-02-16 03:46:54.639701 | orchestrator | Monday 16 February 2026 03:46:43 +0000 (0:00:00.309) 0:10:17.436 *******
2026-02-16 03:46:54.639726 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 03:46:54.639759 | orchestrator | 
2026-02-16 03:46:54.639771 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-02-16 03:46:54.639782 | orchestrator | Monday 16 February 2026 03:46:43 +0000 (0:00:00.789) 0:10:18.225 *******
2026-02-16 03:46:54.639792 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 03:46:54.639803 | orchestrator | 
2026-02-16 03:46:54.639813 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-02-16 03:46:54.639824 | orchestrator | Monday 16 February 2026 03:46:44 +0000 (0:00:00.748) 0:10:18.974 *******
2026-02-16 03:46:54.639835 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:46:54.639845 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:46:54.639857 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:46:54.639864 | orchestrator | 
2026-02-16 03:46:54.639872 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-02-16 03:46:54.639879 | orchestrator | Monday 16 February 2026 03:46:45 +0000 (0:00:01.248) 0:10:20.222 *******
2026-02-16 03:46:54.639887 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:46:54.639893 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:46:54.639901 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:46:54.639908 | orchestrator | 
2026-02-16 03:46:54.639915 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-16 03:46:54.639921 | orchestrator | Monday 16 February 2026 03:46:47 +0000 (0:00:01.157) 0:10:21.380 *******
2026-02-16 03:46:54.639928 | orchestrator | changed: [testbed-node-3]
2026-02-16 03:46:54.639947 | orchestrator | changed: [testbed-node-5]
2026-02-16 03:46:54.639954 | orchestrator | changed: [testbed-node-4]
2026-02-16 03:46:54.639963 | orchestrator | 
2026-02-16 03:46:54.639973 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-16 03:46:54.639983 | orchestrator | Monday 16 February 2026 03:46:48 +0000 (0:00:01.758) 0:10:23.139 *******
2026-02-16 03:46:54.639993 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-16 03:46:54.640003 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-16 03:46:54.640014 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-16 03:46:54.640023 | orchestrator | 
2026-02-16 03:46:54.640033 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-16 03:46:54.640043 | orchestrator | Monday 16 February 2026 03:46:51 +0000 (0:00:02.622) 0:10:25.761 *******
2026-02-16 03:46:54.640052 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:46:54.640062 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:46:54.640072 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:46:54.640083 | orchestrator | 
2026-02-16 03:46:54.640092 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-16 03:46:54.640102 | orchestrator | Monday 16 February 2026 03:46:51 +0000 (0:00:00.337) 0:10:26.099 *******
2026-02-16 03:46:54.640112 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 03:46:54.640122 | orchestrator | 
2026-02-16 03:46:54.640132 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-16 03:46:54.640142 | orchestrator | Monday 16 February 2026 03:46:52 +0000 (0:00:00.763) 0:10:26.862 *******
2026-02-16 03:46:54.640152 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:46:54.640163 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:46:54.640172 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:46:54.640182 | orchestrator | 
2026-02-16 03:46:54.640191 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-16 03:46:54.640201 | orchestrator | Monday 16 February 2026 03:46:52 +0000 (0:00:00.335) 0:10:27.198 *******
2026-02-16 03:46:54.640211 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:46:54.640230 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:46:54.640239 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:46:54.640249 | orchestrator | 
2026-02-16 03:46:54.640258 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-16 03:46:54.640268 | orchestrator | Monday 16 February 2026 03:46:53 +0000 (0:00:00.328) 0:10:27.526 *******
2026-02-16 03:46:54.640277 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2026-02-16 03:46:54.640287 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2026-02-16 03:46:54.640296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2026-02-16 03:46:54.640306 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:46:54.640315 | orchestrator | 
2026-02-16 03:46:54.640325 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-16 03:46:54.640335 | orchestrator | Monday 16 February 2026 03:46:54 +0000 (0:00:00.885) 0:10:28.411 *******
2026-02-16 03:46:54.640344 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:46:54.640354 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:46:54.640363 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:46:54.640373 | orchestrator | 
2026-02-16 03:46:54.640472 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 03:46:54.640483 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-02-16 03:46:54.640494 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-02-16 03:46:54.640503 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-02-16 03:46:54.640519 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-02-16 03:46:54.640529 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-02-16 03:46:54.640539 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-02-16 03:46:54.640548 | orchestrator | 
2026-02-16 03:46:54.640558 | orchestrator | 
2026-02-16 03:46:54.640567 | orchestrator | 
2026-02-16 03:46:54.640577 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 03:46:54.640587 | orchestrator | Monday 16 February 2026 03:46:54 +0000 (0:00:00.525) 0:10:28.937 *******
2026-02-16 03:46:54.640597 | orchestrator | ===============================================================================
2026-02-16 03:46:54.640606 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 57.13s
2026-02-16 03:46:54.640616 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 38.70s
2026-02-16 03:46:54.640625 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.35s
2026-02-16 03:46:54.640635 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.29s
2026-02-16 03:46:54.640644 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.96s
2026-02-16 03:46:54.640661 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.77s
2026-02-16 03:46:55.080565 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.72s
2026-02-16 03:46:55.080645 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.57s
2026-02-16 03:46:55.080654 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.22s
2026-02-16 03:46:55.080662 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.55s
2026-02-16 03:46:55.080669 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.62s
2026-02-16 03:46:55.080676 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.44s
2026-02-16 03:46:55.080703 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.09s
2026-02-16 03:46:55.080711 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.44s
2026-02-16 03:46:55.080718 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.10s
2026-02-16 03:46:55.080724 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.73s
2026-02-16 03:46:55.080731 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.69s
2026-02-16 03:46:55.080738 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.59s
2026-02-16 03:46:55.080745 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.49s
2026-02-16 03:46:55.080752 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.27s
2026-02-16 03:46:57.424269 | orchestrator | 2026-02-16 03:46:57 | INFO  | Task 00dceb68-f232-4814-87d2-ab3e6314820f (ceph-pools) was prepared for execution.
2026-02-16 03:46:57.424371 | orchestrator | 2026-02-16 03:46:57 | INFO  | It takes a moment until task 00dceb68-f232-4814-87d2-ab3e6314820f (ceph-pools) has been started and output is visible here.
2026-02-16 03:47:11.467866 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-16 03:47:11.467971 | orchestrator | 2.16.14
2026-02-16 03:47:11.467987 | orchestrator | 
2026-02-16 03:47:11.467999 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-02-16 03:47:11.468010 | orchestrator | 
2026-02-16 03:47:11.468020 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-16 03:47:11.468031 | orchestrator | Monday 16 February 2026 03:47:01 +0000 (0:00:00.601) 0:00:00.601 *******
2026-02-16 03:47:11.468041 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 03:47:11.468051 | orchestrator | 
2026-02-16 03:47:11.468061 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-16 03:47:11.468071 | orchestrator | Monday 16 February 2026 03:47:02 +0000 (0:00:00.629) 0:00:01.231 *******
2026-02-16 03:47:11.468081 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:47:11.468090 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:47:11.468100 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:47:11.468110 | orchestrator | 
2026-02-16 03:47:11.468120 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-16 03:47:11.468129 | orchestrator | Monday 16 February 2026 03:47:03 +0000 (0:00:00.643) 0:00:01.875 *******
2026-02-16 03:47:11.468139 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:47:11.468197 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:47:11.468207 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:47:11.468217 | orchestrator | 
2026-02-16 03:47:11.468226 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-16 03:47:11.468236 | orchestrator | Monday 16 February 2026 03:47:03 +0000 (0:00:00.295) 0:00:02.170 *******
2026-02-16 03:47:11.468246 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:47:11.468255 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:47:11.468265 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:47:11.468275 | orchestrator | 
2026-02-16 03:47:11.468284 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-16 03:47:11.468294 | orchestrator | Monday 16 February 2026 03:47:04 +0000 (0:00:00.815) 0:00:02.986 *******
2026-02-16 03:47:11.468303 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:47:11.468313 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:47:11.468323 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:47:11.468332 | orchestrator | 
2026-02-16 03:47:11.468357 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-16 03:47:11.468367 | orchestrator | Monday 16 February 2026 03:47:04 +0000 (0:00:00.320) 0:00:03.306 *******
2026-02-16 03:47:11.468377 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:47:11.468408 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:47:11.468419 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:47:11.468430 | orchestrator | 
2026-02-16 03:47:11.468441 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-16 03:47:11.468452 | orchestrator | Monday 16 February 2026 03:47:04 +0000 (0:00:00.306) 0:00:03.613 *******
2026-02-16 03:47:11.468463 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:47:11.468474 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:47:11.468485 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:47:11.468494 | orchestrator | 
2026-02-16 03:47:11.468504 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-16 03:47:11.468513 | orchestrator | Monday 16 February 2026 03:47:05 +0000 (0:00:00.347) 0:00:03.960 *******
2026-02-16 03:47:11.468523 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:11.468535 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:11.468544 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:11.468554 | orchestrator | 
2026-02-16 03:47:11.468564 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-16 03:47:11.468573 | orchestrator | Monday 16 February 2026 03:47:05 +0000 (0:00:00.515) 0:00:04.476 *******
2026-02-16 03:47:11.468583 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:47:11.468592 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:47:11.468601 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:47:11.468611 | orchestrator | 
2026-02-16 03:47:11.468620 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-16 03:47:11.468629 | orchestrator | Monday 16 February 2026 03:47:06 +0000 (0:00:00.338) 0:00:04.815 *******
2026-02-16 03:47:11.468639 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-16 03:47:11.468665 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-16 03:47:11.468675 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-16 03:47:11.468685 | orchestrator | 
2026-02-16 03:47:11.468694 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-16 03:47:11.468704 | orchestrator | Monday 16 February 2026 03:47:06 +0000 (0:00:00.747) 0:00:05.562 *******
2026-02-16 03:47:11.468720 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:47:11.468736 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:47:11.468751 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:47:11.468766 | orchestrator | 
2026-02-16 03:47:11.468782 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-16 03:47:11.468798 | orchestrator | Monday 16 February 2026 03:47:07 +0000 (0:00:00.435) 0:00:05.997 *******
2026-02-16 03:47:11.468814 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-16 03:47:11.468832 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-16 03:47:11.468849 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-16 03:47:11.468865 | orchestrator | 
2026-02-16 03:47:11.468878 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-16 03:47:11.468887 | orchestrator | Monday 16 February 2026 03:47:09 +0000 (0:00:02.165) 0:00:08.163 *******
2026-02-16 03:47:11.468897 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0) 
2026-02-16 03:47:11.468907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1) 
2026-02-16 03:47:11.468916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2) 
2026-02-16 03:47:11.468927 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:11.468937 | orchestrator | 
2026-02-16 03:47:11.468964 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-16 03:47:11.468975 | orchestrator | Monday 16 February 2026 03:47:10 +0000 (0:00:00.625) 0:00:08.788 *******
2026-02-16 03:47:11.468986 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 
2026-02-16 03:47:11.469008 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 
2026-02-16 03:47:11.469019 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 
2026-02-16 03:47:11.469029 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:11.469038 | orchestrator | 
2026-02-16 03:47:11.469048 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-16 03:47:11.469058 | orchestrator | Monday 16 February 2026 03:47:11 +0000 (0:00:01.057) 0:00:09.846 *******
2026-02-16 03:47:11.469069 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-02-16 03:47:11.469088 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-02-16 03:47:11.469099 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-02-16 03:47:11.469109 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:11.469119 | orchestrator | 
2026-02-16 03:47:11.469128 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-16 03:47:11.469138 | orchestrator | Monday 16 February 2026 03:47:11 +0000 (0:00:00.161) 0:00:10.008 *******
2026-02-16 03:47:11.469172 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c4764146f42e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-16 03:47:08.127361', 'end': '2026-02-16 03:47:08.181071', 'delta': '0:00:00.053710', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c4764146f42e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-16 03:47:11.469186 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '8a5d26661ef8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-16 03:47:08.686686', 'end': '2026-02-16 03:47:08.741652', 'delta': '0:00:00.054966', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8a5d26661ef8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-16 03:47:11.469211 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6720fcec1b21', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-16 03:47:09.223496', 'end': '2026-02-16 03:47:09.279080', 'delta': '0:00:00.055584', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6720fcec1b21'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-16 03:47:18.328008 | orchestrator | 
2026-02-16 03:47:18.328215 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-16 03:47:18.328248 | orchestrator | Monday 16 February 2026 03:47:11 +0000 (0:00:00.179) 0:00:10.187 *******
2026-02-16 03:47:18.328270 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:47:18.328291 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:47:18.328310 |
orchestrator | ok: [testbed-node-5]
2026-02-16 03:47:18.328330 | orchestrator | 
2026-02-16 03:47:18.328349 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-16 03:47:18.328370 | orchestrator | Monday 16 February 2026 03:47:11 +0000 (0:00:00.452) 0:00:10.640 *******
2026-02-16 03:47:18.328392 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-16 03:47:18.328413 | orchestrator | 
2026-02-16 03:47:18.328432 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-16 03:47:18.328452 | orchestrator | Monday 16 February 2026 03:47:13 +0000 (0:00:01.697) 0:00:12.338 *******
2026-02-16 03:47:18.328472 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:18.328491 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:18.328510 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:18.328531 | orchestrator | 
2026-02-16 03:47:18.328571 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-16 03:47:18.328593 | orchestrator | Monday 16 February 2026 03:47:13 +0000 (0:00:00.305) 0:00:12.644 *******
2026-02-16 03:47:18.328641 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:18.328674 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:18.328695 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:18.328714 | orchestrator | 
2026-02-16 03:47:18.328734 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-16 03:47:18.328754 | orchestrator | Monday 16 February 2026 03:47:14 +0000 (0:00:00.839) 0:00:13.483 *******
2026-02-16 03:47:18.328767 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:18.328778 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:18.328794 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:18.328812 | orchestrator | 
2026-02-16 03:47:18.328830 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-16 03:47:18.328850 | orchestrator | Monday 16 February 2026 03:47:15 +0000 (0:00:00.321) 0:00:13.805 *******
2026-02-16 03:47:18.328869 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:47:18.328888 | orchestrator | 
2026-02-16 03:47:18.328907 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-16 03:47:18.328926 | orchestrator | Monday 16 February 2026 03:47:15 +0000 (0:00:00.143) 0:00:13.949 *******
2026-02-16 03:47:18.328946 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:18.328964 | orchestrator | 
2026-02-16 03:47:18.328980 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-16 03:47:18.328997 | orchestrator | Monday 16 February 2026 03:47:15 +0000 (0:00:00.235) 0:00:14.185 *******
2026-02-16 03:47:18.329008 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:18.329020 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:18.329030 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:18.329132 | orchestrator | 
2026-02-16 03:47:18.329147 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-16 03:47:18.329158 | orchestrator | Monday 16 February 2026 03:47:15 +0000 (0:00:00.298) 0:00:14.483 *******
2026-02-16 03:47:18.329169 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:18.329180 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:18.329190 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:18.329201 | orchestrator | 
2026-02-16 03:47:18.329212 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-16 03:47:18.329223 | orchestrator | Monday 16 February 2026 03:47:16 +0000 (0:00:00.314) 0:00:14.798 *******
2026-02-16 03:47:18.329233 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:18.329244 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:18.329255 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:18.329265 | orchestrator | 
2026-02-16 03:47:18.329276 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-16 03:47:18.329287 | orchestrator | Monday 16 February 2026 03:47:16 +0000 (0:00:00.530) 0:00:15.328 *******
2026-02-16 03:47:18.329298 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:18.329309 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:18.329319 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:18.329330 | orchestrator | 
2026-02-16 03:47:18.329341 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-16 03:47:18.329351 | orchestrator | Monday 16 February 2026 03:47:16 +0000 (0:00:00.329) 0:00:15.658 *******
2026-02-16 03:47:18.329363 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:18.329374 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:18.329385 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:18.329395 | orchestrator | 
2026-02-16 03:47:18.329406 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-16 03:47:18.329417 | orchestrator | Monday 16 February 2026 03:47:17 +0000 (0:00:00.319) 0:00:15.978 *******
2026-02-16 03:47:18.329428 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:18.329439 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:18.329449 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:18.329465 | orchestrator | 
2026-02-16 03:47:18.329476 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-16 03:47:18.329488 | orchestrator | Monday 16 February 2026 03:47:17 +0000 (0:00:00.522) 0:00:16.500 *******
2026-02-16 03:47:18.329499 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:18.329510 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:18.329520 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:18.329531 | orchestrator | 
2026-02-16 03:47:18.329542 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-16 03:47:18.329553 | orchestrator | Monday 16 February 2026 03:47:18 +0000 (0:00:00.304) 0:00:16.805 *******
2026-02-16 03:47:18.329590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f9a42f5--b575--5e11--9555--a5550e2fae1e-osd--block--2f9a42f5--b575--5e11--9555--a5550e2fae1e', 'dm-uuid-LVM-F4bqzAKmgcv4nzZjVJIDDLRdBkjdiY7Ac3eDMWCQjEFL46zd8qXZ7hWvk7L0nQAD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}) 
2026-02-16 03:47:18.329670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50d7a967--e09e--512a--aa83--aa9bbdf9ab74-osd--block--50d7a967--e09e--512a--aa83--aa9bbdf9ab74', 'dm-uuid-LVM-2dhVtclKCjfsjMcDe2D03F1qrxXtffQzYuMeigkCrxOY0hLAH1gOwaoo3bAqwsvb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}) 
2026-02-16 03:47:18.329695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-02-16 03:47:18.329709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-02-16 03:47:18.329720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-02-16 03:47:18.329732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-02-16 03:47:18.329743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-02-16 03:47:18.329755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d-osd--block--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d', 'dm-uuid-LVM-sWHkNGoua6AD2gtW0aHfBT1ggS3B4VVdqYYWm2N1bkS9UT0Dip02AjKcu40awaVv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}) 
2026-02-16 03:47:18.329775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-02-16 03:47:18.441576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ec6a818--dc71--5cb4--ac47--83f209d09bca-osd--block--3ec6a818--dc71--5cb4--ac47--83f209d09bca', 'dm-uuid-LVM-IKNT1aRSRRXmVnhjGHBWtObOyhGZoCrKxknn5549qE5Iv1X6exAA2Hq2RDcxdb2r'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}) 
2026-02-16 03:47:18.441697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {},
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.441710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.441718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.441726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.441753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part1', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part14', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part15', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part16', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:47:18.441764 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.441782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2f9a42f5--b575--5e11--9555--a5550e2fae1e-osd--block--2f9a42f5--b575--5e11--9555--a5550e2fae1e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1ITxS0-SFz0-FdlF-VzSF-Uv8m-y10A-m0caaJ', 'scsi-0QEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51', 'scsi-SQEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:47:18.441792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.441799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--50d7a967--e09e--512a--aa83--aa9bbdf9ab74-osd--block--50d7a967--e09e--512a--aa83--aa9bbdf9ab74'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UNvti2-beMu-mtun-nkoB-anD7-j3vD-BO56Wb', 'scsi-0QEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e', 'scsi-SQEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:47:18.441807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.441815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2', 'scsi-SQEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:47:18.441829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.565337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-16-02-25-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:47:18.565451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-16 03:47:18.565463 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.565471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part1', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part14', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part15', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part16', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:47:18.565479 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:47:18.565499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d-osd--block--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-W4T77R-WX0u-2wiK-0VwS-pHXw-eigq-78SyVp', 'scsi-0QEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829', 'scsi-SQEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:47:18.565516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3ec6a818--dc71--5cb4--ac47--83f209d09bca-osd--block--3ec6a818--dc71--5cb4--ac47--83f209d09bca'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ezeU5X-kiVi-Bwdm-EJU8-vTMX-Ty8v-7odRXz', 'scsi-0QEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e', 'scsi-SQEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:47:18.565522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705', 'scsi-SQEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:47:18.565529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-16-02-25-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:47:18.565534 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:47:18.565540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--10a0662d--59e9--5a43--af5c--1b6d671b7fa5-osd--block--10a0662d--59e9--5a43--af5c--1b6d671b7fa5', 'dm-uuid-LVM-SWv31bXFKxTO3vyaMihj1WLbgzWvzkgjdSLmrZCRVKIRBOjrNick0KroaJNYuYcA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.565547 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f418f421--cc32--53ce--b421--39353fe37c02-osd--block--f418f421--cc32--53ce--b421--39353fe37c02', 'dm-uuid-LVM-fuzYkTDOD1mzGPTtEVy3HIfkbUT8vrouEUngu6j9gDpOiJ09icmXLIesmhVGIdAG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.565553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.565566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.780470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.780597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.780611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.780620 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.780628 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.780636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-16 03:47:18.780668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part1', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part14', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part15', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part16', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:47:18.780698 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--10a0662d--59e9--5a43--af5c--1b6d671b7fa5-osd--block--10a0662d--59e9--5a43--af5c--1b6d671b7fa5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-z25UVR-mt7s-2TOu-f4Na-2m38-OcPQ-rSbkPq', 'scsi-0QEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5', 'scsi-SQEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:47:18.780709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f418f421--cc32--53ce--b421--39353fe37c02-osd--block--f418f421--cc32--53ce--b421--39353fe37c02'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Qrttlw-98AS-fQrI-yUr1-wyrI-2oj6-dafTom', 'scsi-0QEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569', 'scsi-SQEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:47:18.780718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d', 'scsi-SQEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:47:18.780726 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-16-02-25-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-16 03:47:18.780741 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:47:18.780751 | orchestrator | 2026-02-16 03:47:18.780759 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-16 03:47:18.780769 | orchestrator | Monday 16 February 2026 03:47:18 +0000 (0:00:00.595) 0:00:17.400 ******* 2026-02-16 03:47:18.780783 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f9a42f5--b575--5e11--9555--a5550e2fae1e-osd--block--2f9a42f5--b575--5e11--9555--a5550e2fae1e', 'dm-uuid-LVM-F4bqzAKmgcv4nzZjVJIDDLRdBkjdiY7Ac3eDMWCQjEFL46zd8qXZ7hWvk7L0nQAD'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:18.889147 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50d7a967--e09e--512a--aa83--aa9bbdf9ab74-osd--block--50d7a967--e09e--512a--aa83--aa9bbdf9ab74', 'dm-uuid-LVM-2dhVtclKCjfsjMcDe2D03F1qrxXtffQzYuMeigkCrxOY0hLAH1gOwaoo3bAqwsvb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:18.889229 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:18.889240 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:18.889248 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:18.889255 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:18.889277 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:18.889303 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:18.889311 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:18.889319 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d-osd--block--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d', 'dm-uuid-LVM-sWHkNGoua6AD2gtW0aHfBT1ggS3B4VVdqYYWm2N1bkS9UT0Dip02AjKcu40awaVv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:18.889326 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:18.889333 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ec6a818--dc71--5cb4--ac47--83f209d09bca-osd--block--3ec6a818--dc71--5cb4--ac47--83f209d09bca', 'dm-uuid-LVM-IKNT1aRSRRXmVnhjGHBWtObOyhGZoCrKxknn5549qE5Iv1X6exAA2Hq2RDcxdb2r'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-02-16 03:47:18.889388 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part1', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part14', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part15', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part16', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:18.999985 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.000198 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2f9a42f5--b575--5e11--9555--a5550e2fae1e-osd--block--2f9a42f5--b575--5e11--9555--a5550e2fae1e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1ITxS0-SFz0-FdlF-VzSF-Uv8m-y10A-m0caaJ', 'scsi-0QEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51', 'scsi-SQEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.000270 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--50d7a967--e09e--512a--aa83--aa9bbdf9ab74-osd--block--50d7a967--e09e--512a--aa83--aa9bbdf9ab74'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UNvti2-beMu-mtun-nkoB-anD7-j3vD-BO56Wb', 'scsi-0QEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e', 'scsi-SQEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.000309 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.000331 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2', 'scsi-SQEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.000375 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.000396 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-16-02-25-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.000442 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.000462 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.000489 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.000509 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.000529 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:47:19.000563 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.103332 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part1', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part14', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part15', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part16', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.103479 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d-osd--block--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-W4T77R-WX0u-2wiK-0VwS-pHXw-eigq-78SyVp', 'scsi-0QEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829', 'scsi-SQEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.103500 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--10a0662d--59e9--5a43--af5c--1b6d671b7fa5-osd--block--10a0662d--59e9--5a43--af5c--1b6d671b7fa5', 'dm-uuid-LVM-SWv31bXFKxTO3vyaMihj1WLbgzWvzkgjdSLmrZCRVKIRBOjrNick0KroaJNYuYcA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.103547 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f418f421--cc32--53ce--b421--39353fe37c02-osd--block--f418f421--cc32--53ce--b421--39353fe37c02', 'dm-uuid-LVM-fuzYkTDOD1mzGPTtEVy3HIfkbUT8vrouEUngu6j9gDpOiJ09icmXLIesmhVGIdAG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.103581 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3ec6a818--dc71--5cb4--ac47--83f209d09bca-osd--block--3ec6a818--dc71--5cb4--ac47--83f209d09bca'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ezeU5X-kiVi-Bwdm-EJU8-vTMX-Ty8v-7odRXz', 'scsi-0QEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e', 'scsi-SQEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.103601 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.103631 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705', 'scsi-SQEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.103651 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.103683 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-16-02-25-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.266551 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.266656 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:47:19.266673 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.266687 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.266714 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.266727 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.266738 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.266773 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part1', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part14', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part15', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part16', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.266816 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--10a0662d--59e9--5a43--af5c--1b6d671b7fa5-osd--block--10a0662d--59e9--5a43--af5c--1b6d671b7fa5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-z25UVR-mt7s-2TOu-f4Na-2m38-OcPQ-rSbkPq', 'scsi-0QEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5', 'scsi-SQEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.266830 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f418f421--cc32--53ce--b421--39353fe37c02-osd--block--f418f421--cc32--53ce--b421--39353fe37c02'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Qrttlw-98AS-fQrI-yUr1-wyrI-2oj6-dafTom', 'scsi-0QEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569', 'scsi-SQEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:19.266857 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d', 'scsi-SQEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-16 03:47:30.870337 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-16-02-25-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-16 03:47:30.870436 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:30.870452 | orchestrator |
2026-02-16 03:47:30.870463 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-16 03:47:30.870473 | orchestrator | Monday 16 February 2026 03:47:19 +0000 (0:00:00.594) 0:00:17.995 *******
2026-02-16 03:47:30.870482 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:47:30.870491 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:47:30.870500 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:47:30.870509 | orchestrator |
2026-02-16 03:47:30.870517 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-16 03:47:30.870526 | orchestrator | Monday 16 February 2026 03:47:20 +0000 (0:00:00.874) 0:00:18.870 *******
2026-02-16 03:47:30.870535 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:47:30.870543 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:47:30.870552 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:47:30.870561 | orchestrator |
2026-02-16 03:47:30.870569 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-16 03:47:30.870578 | orchestrator | Monday 16 February 2026 03:47:20 +0000 (0:00:00.332) 0:00:19.202 *******
2026-02-16 03:47:30.870586 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:47:30.870595 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:47:30.870603 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:47:30.870612 | orchestrator |
2026-02-16 03:47:30.870621 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-16 03:47:30.870630 | orchestrator | Monday 16 February 2026 03:47:21 +0000 (0:00:00.642) 0:00:19.845 *******
2026-02-16 03:47:30.870639 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:30.870647 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:30.870656 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:30.870664 | orchestrator |
2026-02-16 03:47:30.870687 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-16 03:47:30.870697 | orchestrator | Monday 16 February 2026 03:47:21 +0000 (0:00:00.302) 0:00:20.147 *******
2026-02-16 03:47:30.870705 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:30.870714 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:30.870722 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:30.870731 | orchestrator |
2026-02-16 03:47:30.870740 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-16 03:47:30.870749 | orchestrator | Monday 16 February 2026 03:47:22 +0000 (0:00:00.686) 0:00:20.833 *******
2026-02-16 03:47:30.870775 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:30.870784 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:30.870793 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:30.870801 | orchestrator |
2026-02-16 03:47:30.870810 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-16 03:47:30.870819 | orchestrator | Monday 16 February 2026 03:47:22 +0000 (0:00:00.317) 0:00:21.151 *******
2026-02-16 03:47:30.870827 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-16 03:47:30.870836 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-16 03:47:30.870845 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-16 03:47:30.870853 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-16 03:47:30.870862 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-16 03:47:30.870872 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-16 03:47:30.870905 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-16 03:47:30.870915 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-16 03:47:30.870925 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-16 03:47:30.870934 | orchestrator |
2026-02-16 03:47:30.870944 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-16 03:47:30.870954 | orchestrator | Monday 16 February 2026 03:47:23 +0000 (0:00:01.045) 0:00:22.197 *******
2026-02-16 03:47:30.870964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-16 03:47:30.870973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-16 03:47:30.870983 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-16 03:47:30.870992 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:30.871002 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-16 03:47:30.871011 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-16 03:47:30.871021 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-16 03:47:30.871031 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:30.871041 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-16 03:47:30.871051 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-16 03:47:30.871061 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-16 03:47:30.871070 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:30.871080 | orchestrator |
2026-02-16 03:47:30.871090 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-16 03:47:30.871100 | orchestrator | Monday 16 February 2026 03:47:23 +0000 (0:00:00.351) 0:00:22.549 *******
2026-02-16 03:47:30.871123 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 03:47:30.871134 | orchestrator |
2026-02-16 03:47:30.871145 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-16 03:47:30.871156 | orchestrator | Monday 16 February 2026 03:47:24 +0000 (0:00:00.709) 0:00:23.258 *******
2026-02-16 03:47:30.871166 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:30.871176 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:30.871185 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:30.871195 | orchestrator |
2026-02-16 03:47:30.871205 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-16 03:47:30.871215 | orchestrator | Monday 16 February 2026 03:47:24 +0000 (0:00:00.319) 0:00:23.578 *******
2026-02-16 03:47:30.871225 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:30.871234 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:30.871243 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:30.871251 | orchestrator |
2026-02-16 03:47:30.871260 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-16 03:47:30.871276 | orchestrator | Monday 16 February 2026 03:47:25 +0000 (0:00:00.321) 0:00:23.900 *******
2026-02-16 03:47:30.871284 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:30.871293 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:47:30.871302 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:47:30.871311 | orchestrator |
2026-02-16 03:47:30.871319 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-16 03:47:30.871328 | orchestrator | Monday 16 February 2026 03:47:25 +0000 (0:00:00.510) 0:00:24.410 *******
2026-02-16 03:47:30.871337 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:47:30.871345 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:47:30.871354 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:47:30.871363 | orchestrator |
2026-02-16 03:47:30.871371 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-16 03:47:30.871401 | orchestrator | Monday 16 February 2026 03:47:26 +0000 (0:00:00.400) 0:00:24.811 *******
2026-02-16 03:47:30.871410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-16 03:47:30.871418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-16 03:47:30.871427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-16 03:47:30.871435 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:30.871444 | orchestrator |
2026-02-16 03:47:30.871452 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-16 03:47:30.871461 | orchestrator | Monday 16 February 2026 03:47:26 +0000 (0:00:00.413) 0:00:25.224 *******
2026-02-16 03:47:30.871469 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-16 03:47:30.871478 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-16 03:47:30.871487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-16 03:47:30.871496 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:30.871504 | orchestrator |
2026-02-16 03:47:30.871513 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-16 03:47:30.871522 | orchestrator | Monday 16 February 2026 03:47:26 +0000 (0:00:00.412) 0:00:25.637 *******
2026-02-16 03:47:30.871530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-16 03:47:30.871538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-16 03:47:30.871547 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-16 03:47:30.871555 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:47:30.871564 | orchestrator |
2026-02-16 03:47:30.871573 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-16 03:47:30.871581 | orchestrator | Monday 16 February 2026 03:47:27 +0000 (0:00:00.377) 0:00:26.014 *******
2026-02-16 03:47:30.871590 | orchestrator | ok: [testbed-node-3]
2026-02-16 03:47:30.871598 | orchestrator | ok: [testbed-node-4]
2026-02-16 03:47:30.871607 | orchestrator | ok: [testbed-node-5]
2026-02-16 03:47:30.871615 | orchestrator |
2026-02-16 03:47:30.871624 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-16 03:47:30.871632 | orchestrator | Monday 16 February 2026 03:47:27 +0000 (0:00:00.324) 0:00:26.339 *******
2026-02-16 03:47:30.871641 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-16 03:47:30.871649 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-16 03:47:30.871658 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-16 03:47:30.871666 | orchestrator |
2026-02-16 03:47:30.871675 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-16 03:47:30.871683 | orchestrator | Monday 16 February 2026 03:47:28 +0000 (0:00:00.736) 0:00:27.075 *******
2026-02-16 03:47:30.871692 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-16 03:47:30.871701 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-16 03:47:30.871709 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-16 03:47:30.871718 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-16 03:47:30.871732 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-16 03:47:30.871740 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-16 03:47:30.871749 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-16 03:47:30.871757 | orchestrator |
2026-02-16 03:47:30.871766 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-16 03:47:30.871775 | orchestrator | Monday 16 February 2026 03:47:29 +0000 (0:00:00.827) 0:00:27.903 *******
2026-02-16 03:47:30.871783 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-16 03:47:30.871798 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-16 03:49:09.243233 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-16 03:49:09.243372 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-16 03:49:09.243390 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-16 03:49:09.243399 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-16 03:49:09.243448 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-16 03:49:09.243459 | orchestrator |
2026-02-16 03:49:09.243469 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-02-16 03:49:09.243478 | orchestrator | Monday 16 February 2026 03:47:30 +0000 (0:00:01.686) 0:00:29.589 *******
2026-02-16 03:49:09.243487 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:49:09.243496 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:49:09.243505 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-02-16 03:49:09.243513 | orchestrator |
2026-02-16 03:49:09.243521 | orchestrator | TASK [create openstack pool(s)] ************************************************
2026-02-16 03:49:09.243530 | orchestrator | Monday 16 February 2026 03:47:31 +0000 (0:00:00.382) 0:00:29.972 *******
2026-02-16 03:49:09.243582 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-16 03:49:09.243594 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-16 03:49:09.243603 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-16 03:49:09.243618 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-16 03:49:09.243633 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-16 03:49:09.243646 | orchestrator |
2026-02-16 03:49:09.243660 | orchestrator | TASK [generate keys] ***********************************************************
2026-02-16 03:49:09.243673 | orchestrator | Monday 16 February 2026 03:48:16 +0000 (0:00:45.092) 0:01:15.065 *******
2026-02-16 03:49:09.243687 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.243742 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.243751 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.243758 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.243766 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.243774 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.243782 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-02-16 03:49:09.243790 | orchestrator |
2026-02-16 03:49:09.243798 | orchestrator | TASK [get keys from monitors] **************************************************
2026-02-16 03:49:09.243806 | orchestrator | Monday 16 February 2026 03:48:40 +0000 (0:00:23.839) 0:01:38.904 *******
2026-02-16 03:49:09.243813 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.243821 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.243829 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.243837 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.243844 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.243852 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.243860 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-16 03:49:09.243868 | orchestrator |
2026-02-16 03:49:09.243875 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-02-16 03:49:09.243883 | orchestrator | Monday 16 February 2026 03:48:51 +0000 (0:00:11.606) 0:01:50.511 *******
2026-02-16 03:49:09.243891 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.243914 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-16 03:49:09.243923 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-16 03:49:09.243931 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.243938 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-16 03:49:09.243946 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-16 03:49:09.243955 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.243963 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-16 03:49:09.243972 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-16 03:49:09.243986 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.244000 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-16 03:49:09.244013 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-16 03:49:09.244027 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.244040 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-16 03:49:09.244054 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-16 03:49:09.244068 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-16 03:49:09.244081 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-16 03:49:09.244094 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-16 03:49:09.244104 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-02-16 03:49:09.244119 | orchestrator |
2026-02-16 03:49:09.244127 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 03:49:09.244135 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-02-16 03:49:09.244144 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-16 03:49:09.244157 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-02-16 03:49:09.244165 | orchestrator |
2026-02-16 03:49:09.244174 | orchestrator |
2026-02-16 03:49:09.244182 | orchestrator |
2026-02-16 03:49:09.244190 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 03:49:09.244198 | orchestrator | Monday 16 February 2026 03:49:08 +0000 (0:00:17.134) 0:02:07.645 *******
2026-02-16 03:49:09.244206 | orchestrator | ===============================================================================
2026-02-16 03:49:09.244213 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.09s
2026-02-16 03:49:09.244221 | orchestrator | generate keys ---------------------------------------------------------- 23.84s
2026-02-16 03:49:09.244229 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.13s
2026-02-16 03:49:09.244237 | orchestrator | get keys from monitors ------------------------------------------------- 11.61s
2026-02-16 03:49:09.244245 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.17s
2026-02-16 03:49:09.244252 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.70s
2026-02-16 03:49:09.244260 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.69s
2026-02-16 03:49:09.244268 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.06s
2026-02-16 03:49:09.244276 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.05s
2026-02-16 03:49:09.244284 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.87s
2026-02-16 03:49:09.244292 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.84s
2026-02-16 03:49:09.244299 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.83s
2026-02-16 03:49:09.244307 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.82s
2026-02-16 03:49:09.244315 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.75s
2026-02-16 03:49:09.244323 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.74s
2026-02-16 03:49:09.244335 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.71s
2026-02-16 03:49:09.244348 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.69s
2026-02-16 03:49:09.244361 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.64s
2026-02-16 03:49:09.244374 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s
2026-02-16 03:49:09.244388 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.63s
2026-02-16 03:49:11.468109 | orchestrator | 2026-02-16 03:49:11 | INFO  | Task 1f4ae6dc-3382-4d91-bc13-f8fc7e28b493 (copy-ceph-keys) was prepared for execution.
2026-02-16 03:49:11.468203 | orchestrator | 2026-02-16 03:49:11 | INFO  | It takes a moment until task 1f4ae6dc-3382-4d91-bc13-f8fc7e28b493 (copy-ceph-keys) has been started and output is visible here.
2026-02-16 03:49:49.707460 | orchestrator |
2026-02-16 03:49:49.707580 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-02-16 03:49:49.707599 | orchestrator |
2026-02-16 03:49:49.707611 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-02-16 03:49:49.707623 | orchestrator | Monday 16 February 2026 03:49:15 +0000 (0:00:00.159) 0:00:00.159 *******
2026-02-16 03:49:49.707656 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-02-16 03:49:49.707670 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-16 03:49:49.707680 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-16 03:49:49.707691 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-02-16 03:49:49.707702 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-16 03:49:49.707712 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-02-16 03:49:49.707723 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-02-16 03:49:49.707734 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-02-16 03:49:49.707744 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-02-16 03:49:49.707755 | orchestrator |
2026-02-16 03:49:49.707766 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-02-16 03:49:49.707777 | orchestrator | Monday 16 February 2026 03:49:20 +0000 (0:00:04.593) 0:00:04.753 *******
2026-02-16 03:49:49.707787 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-02-16 03:49:49.707798 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-16 03:49:49.707809 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-16 03:49:49.707820 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-02-16 03:49:49.707846 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-16 03:49:49.707858 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-02-16 03:49:49.707868 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-02-16 03:49:49.707879 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-02-16 03:49:49.707889 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-02-16 03:49:49.707900 | orchestrator |
2026-02-16 03:49:49.707911 | orchestrator | TASK [Create share directory] **************************************************
2026-02-16 03:49:49.707922 | orchestrator | Monday 16 February 2026 03:49:24 +0000 (0:00:04.172) 0:00:08.925 *******
2026-02-16 03:49:49.707933 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-16 03:49:49.707944 | orchestrator |
2026-02-16 03:49:49.707955 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-02-16 03:49:49.707965 | orchestrator | Monday 16 February 2026 03:49:25 +0000 (0:00:00.924) 0:00:09.849 *******
2026-02-16 03:49:49.707976 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-02-16 03:49:49.707987 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-16 03:49:49.707997 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-16 03:49:49.708008 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-02-16 03:49:49.708019 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-16 03:49:49.708030 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-02-16 03:49:49.708041 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-02-16 03:49:49.708052 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-02-16 03:49:49.708069 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-02-16 03:49:49.708080 | orchestrator |
2026-02-16 03:49:49.708093 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-02-16 03:49:49.708112 | orchestrator | Monday 16 February 2026 03:49:38 +0000 (0:00:13.158) 0:00:23.008 *******
2026-02-16 03:49:49.708134 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-02-16 03:49:49.708161 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-02-16 03:49:49.708179 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-16 03:49:49.708233 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-16 03:49:49.708350 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-16 03:49:49.708376 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-16 03:49:49.708392 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-02-16 03:49:49.708403 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-02-16 03:49:49.708413 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-02-16 03:49:49.708424 | orchestrator |
2026-02-16 03:49:49.708435 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-02-16 03:49:49.708446 | orchestrator | Monday 16 February 2026 03:49:42 +0000 (0:00:04.039) 0:00:27.047 *******
2026-02-16 03:49:49.708457 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-02-16 03:49:49.708468 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-16 03:49:49.708478 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-16 03:49:49.708489 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-02-16 03:49:49.708500 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-16 03:49:49.708510 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-02-16 03:49:49.708521 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-02-16 03:49:49.708532 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-02-16 03:49:49.708542 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-02-16 03:49:49.708553 | orchestrator |
2026-02-16 03:49:49.708563 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 03:49:49.708574 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 03:49:49.708586 | orchestrator |
2026-02-16 03:49:49.708598 | orchestrator |
2026-02-16 03:49:49.708609 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 03:49:49.708620 | orchestrator | Monday 16 February 2026 03:49:49 +0000 (0:00:06.946) 0:00:33.994 *******
2026-02-16 03:49:49.708638 | orchestrator | ===============================================================================
2026-02-16 03:49:49.708650 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.16s
2026-02-16 03:49:49.708660 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.95s
2026-02-16 03:49:49.708671 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.59s
2026-02-16 03:49:49.708682 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.17s
2026-02-16 03:49:49.708693 | orchestrator | Check if target directories exist --------------------------------------- 4.04s
2026-02-16 03:49:49.708713 | orchestrator | Create share directory -------------------------------------------------- 0.92s
2026-02-16 03:50:02.021077 | orchestrator | 2026-02-16 03:50:02 | INFO  | Task 60a1f0a5-62f7-4dd1-be96-730c5330875f (cephclient) was prepared for execution.
2026-02-16 03:50:02.021228 | orchestrator | 2026-02-16 03:50:02 | INFO  | It takes a moment until task 60a1f0a5-62f7-4dd1-be96-730c5330875f (cephclient) has been started and output is visible here.
2026-02-16 03:51:03.091958 | orchestrator |
2026-02-16 03:51:03.092075 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-02-16 03:51:03.092093 | orchestrator |
2026-02-16 03:51:03.092106 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-02-16 03:51:03.092118 | orchestrator | Monday 16 February 2026 03:50:06 +0000 (0:00:00.238) 0:00:00.238 *******
2026-02-16 03:51:03.092130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-02-16 03:51:03.092142 | orchestrator |
2026-02-16 03:51:03.092153 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-02-16 03:51:03.092165 | orchestrator | Monday 16 February 2026 03:50:06 +0000 (0:00:00.246) 0:00:00.484 *******
2026-02-16 03:51:03.092177 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-02-16 03:51:03.092188 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-02-16 03:51:03.092199 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-02-16 03:51:03.092211 | orchestrator |
2026-02-16 03:51:03.092222 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-02-16 03:51:03.092233 | orchestrator | Monday 16 February 2026 03:50:07 +0000 (0:00:01.275) 0:00:01.760 *******
2026-02-16 03:51:03.092244 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-02-16 03:51:03.092256 | orchestrator |
2026-02-16 03:51:03.092267 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-02-16 03:51:03.092278 | orchestrator | Monday 16 February 2026 03:50:09 +0000 (0:00:01.440) 0:00:03.200 *******
2026-02-16 03:51:03.092289 | orchestrator | changed: [testbed-manager]
2026-02-16 03:51:03.092300 | orchestrator |
2026-02-16 03:51:03.092311 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-02-16 03:51:03.092322 | orchestrator | Monday 16 February 2026 03:50:10 +0000 (0:00:00.960) 0:00:04.161 *******
2026-02-16 03:51:03.092338 | orchestrator | changed: [testbed-manager]
2026-02-16 03:51:03.092356 | orchestrator |
2026-02-16 03:51:03.092375 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-02-16 03:51:03.092394 | orchestrator | Monday 16 February 2026 03:50:11 +0000 (0:00:00.907) 0:00:05.069 *******
2026-02-16 03:51:03.092411 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-02-16 03:51:03.092428 | orchestrator | ok: [testbed-manager]
2026-02-16 03:51:03.092446 | orchestrator |
2026-02-16 03:51:03.092463 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-16 03:51:03.092482 | orchestrator | Monday 16 February 2026 03:50:53 +0000 (0:00:42.118) 0:00:47.187 *******
2026-02-16 03:51:03.092498 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-02-16 03:51:03.092517 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-02-16 03:51:03.092535 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-02-16 03:51:03.092553 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-02-16 03:51:03.092614 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-02-16 03:51:03.092647 | orchestrator |
2026-02-16 03:51:03.092665 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-16 03:51:03.092684 | orchestrator | Monday 16 February 2026 03:50:57 +0000 (0:00:04.082) 0:00:51.269 *******
2026-02-16 03:51:03.092701 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-16 03:51:03.092754 | orchestrator |
2026-02-16 03:51:03.092774 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-16 03:51:03.092793 | orchestrator | Monday 16 February 2026 03:50:57 +0000 (0:00:00.493) 0:00:51.763 *******
2026-02-16 03:51:03.092811 | orchestrator | skipping: [testbed-manager]
2026-02-16 03:51:03.092829 | orchestrator |
2026-02-16 03:51:03.092848 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-16 03:51:03.092867 | orchestrator | Monday 16 February 2026 03:50:57 +0000 (0:00:00.128) 0:00:51.891 *******
2026-02-16 03:51:03.092885 | orchestrator | skipping: [testbed-manager]
2026-02-16 03:51:03.092903 | orchestrator |
2026-02-16 03:51:03.092918 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-02-16 03:51:03.092929 | orchestrator | Monday 16 February 2026 03:50:58 +0000 (0:00:00.528) 0:00:52.420 *******
2026-02-16 03:51:03.092939 | orchestrator | changed: [testbed-manager]
2026-02-16 03:51:03.092950 | orchestrator |
2026-02-16 03:51:03.092961 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-02-16 03:51:03.092972 | orchestrator | Monday 16 February 2026 03:51:00 +0000 (0:00:01.470) 0:00:53.890 *******
2026-02-16 03:51:03.092983 | orchestrator | changed: [testbed-manager]
2026-02-16 03:51:03.092993 | orchestrator |
2026-02-16 03:51:03.093019 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-02-16 03:51:03.093031 | orchestrator | Monday 16 February 2026 03:51:00 +0000 (0:00:00.754) 0:00:54.644 *******
2026-02-16 03:51:03.093041 | orchestrator | changed: [testbed-manager]
2026-02-16 03:51:03.093052 |
orchestrator | 2026-02-16 03:51:03.093063 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-02-16 03:51:03.093074 | orchestrator | Monday 16 February 2026 03:51:01 +0000 (0:00:00.573) 0:00:55.218 ******* 2026-02-16 03:51:03.093085 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-16 03:51:03.093095 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-16 03:51:03.093106 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-16 03:51:03.093117 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-16 03:51:03.093127 | orchestrator | 2026-02-16 03:51:03.093138 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 03:51:03.093150 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 03:51:03.093162 | orchestrator | 2026-02-16 03:51:03.093173 | orchestrator | 2026-02-16 03:51:03.093206 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 03:51:03.093218 | orchestrator | Monday 16 February 2026 03:51:02 +0000 (0:00:01.489) 0:00:56.708 ******* 2026-02-16 03:51:03.093229 | orchestrator | =============================================================================== 2026-02-16 03:51:03.093240 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.12s 2026-02-16 03:51:03.093251 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.08s 2026-02-16 03:51:03.093262 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.49s 2026-02-16 03:51:03.093272 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.47s 2026-02-16 03:51:03.093283 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.44s 2026-02-16 03:51:03.093294 | 
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.28s 2026-02-16 03:51:03.093304 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.96s 2026-02-16 03:51:03.093315 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.91s 2026-02-16 03:51:03.093326 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.75s 2026-02-16 03:51:03.093337 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.57s 2026-02-16 03:51:03.093347 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.53s 2026-02-16 03:51:03.093358 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.49s 2026-02-16 03:51:03.093378 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s 2026-02-16 03:51:03.093389 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-02-16 03:51:05.394764 | orchestrator | 2026-02-16 03:51:05 | INFO  | Task becdf615-8717-4626-ac96-b8c34630b846 (ceph-bootstrap-dashboard) was prepared for execution. 2026-02-16 03:51:05.394844 | orchestrator | 2026-02-16 03:51:05 | INFO  | It takes a moment until task becdf615-8717-4626-ac96-b8c34630b846 (ceph-bootstrap-dashboard) has been started and output is visible here. 
2026-02-16 03:52:27.309447 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-16 03:52:27.309581 | orchestrator | 2.16.14 2026-02-16 03:52:27.309599 | orchestrator | 2026-02-16 03:52:27.309612 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-02-16 03:52:27.309624 | orchestrator | 2026-02-16 03:52:27.309636 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-02-16 03:52:27.309647 | orchestrator | Monday 16 February 2026 03:51:09 +0000 (0:00:00.261) 0:00:00.261 ******* 2026-02-16 03:52:27.309659 | orchestrator | changed: [testbed-manager] 2026-02-16 03:52:27.309670 | orchestrator | 2026-02-16 03:52:27.309681 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-02-16 03:52:27.309693 | orchestrator | Monday 16 February 2026 03:51:11 +0000 (0:00:01.932) 0:00:02.194 ******* 2026-02-16 03:52:27.309704 | orchestrator | changed: [testbed-manager] 2026-02-16 03:52:27.309715 | orchestrator | 2026-02-16 03:52:27.309726 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-02-16 03:52:27.309737 | orchestrator | Monday 16 February 2026 03:51:12 +0000 (0:00:01.035) 0:00:03.229 ******* 2026-02-16 03:52:27.309748 | orchestrator | changed: [testbed-manager] 2026-02-16 03:52:27.309758 | orchestrator | 2026-02-16 03:52:27.309769 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-02-16 03:52:27.309781 | orchestrator | Monday 16 February 2026 03:51:13 +0000 (0:00:01.030) 0:00:04.260 ******* 2026-02-16 03:52:27.309791 | orchestrator | changed: [testbed-manager] 2026-02-16 03:52:27.309802 | orchestrator | 2026-02-16 03:52:27.309813 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-02-16 03:52:27.309824 | orchestrator | Monday 16 February 
2026 03:51:14 +0000 (0:00:01.181) 0:00:05.442 ******* 2026-02-16 03:52:27.309835 | orchestrator | changed: [testbed-manager] 2026-02-16 03:52:27.309846 | orchestrator | 2026-02-16 03:52:27.309857 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-02-16 03:52:27.309940 | orchestrator | Monday 16 February 2026 03:51:15 +0000 (0:00:01.019) 0:00:06.462 ******* 2026-02-16 03:52:27.309954 | orchestrator | changed: [testbed-manager] 2026-02-16 03:52:27.309966 | orchestrator | 2026-02-16 03:52:27.309979 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-02-16 03:52:27.309991 | orchestrator | Monday 16 February 2026 03:51:16 +0000 (0:00:01.042) 0:00:07.504 ******* 2026-02-16 03:52:27.310004 | orchestrator | changed: [testbed-manager] 2026-02-16 03:52:27.310071 | orchestrator | 2026-02-16 03:52:27.310103 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-02-16 03:52:27.310116 | orchestrator | Monday 16 February 2026 03:51:18 +0000 (0:00:02.086) 0:00:09.591 ******* 2026-02-16 03:52:27.310128 | orchestrator | changed: [testbed-manager] 2026-02-16 03:52:27.310140 | orchestrator | 2026-02-16 03:52:27.310152 | orchestrator | TASK [Create admin user] ******************************************************* 2026-02-16 03:52:27.310164 | orchestrator | Monday 16 February 2026 03:51:20 +0000 (0:00:01.134) 0:00:10.725 ******* 2026-02-16 03:52:27.310178 | orchestrator | changed: [testbed-manager] 2026-02-16 03:52:27.310190 | orchestrator | 2026-02-16 03:52:27.310202 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-02-16 03:52:27.310214 | orchestrator | Monday 16 February 2026 03:52:02 +0000 (0:00:42.554) 0:00:53.280 ******* 2026-02-16 03:52:27.310249 | orchestrator | skipping: [testbed-manager] 2026-02-16 03:52:27.310262 | orchestrator | 2026-02-16 03:52:27.310275 | orchestrator | 
PLAY [Restart ceph manager services] ******************************************* 2026-02-16 03:52:27.310288 | orchestrator | 2026-02-16 03:52:27.310300 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-16 03:52:27.310312 | orchestrator | Monday 16 February 2026 03:52:02 +0000 (0:00:00.161) 0:00:53.441 ******* 2026-02-16 03:52:27.310323 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:52:27.310334 | orchestrator | 2026-02-16 03:52:27.310344 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-16 03:52:27.310355 | orchestrator | 2026-02-16 03:52:27.310366 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-16 03:52:27.310376 | orchestrator | Monday 16 February 2026 03:52:14 +0000 (0:00:11.801) 0:01:05.242 ******* 2026-02-16 03:52:27.310387 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:52:27.310398 | orchestrator | 2026-02-16 03:52:27.310409 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-16 03:52:27.310419 | orchestrator | 2026-02-16 03:52:27.310430 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-16 03:52:27.310441 | orchestrator | Monday 16 February 2026 03:52:15 +0000 (0:00:01.229) 0:01:06.472 ******* 2026-02-16 03:52:27.310453 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:52:27.310473 | orchestrator | 2026-02-16 03:52:27.310492 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 03:52:27.310513 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-16 03:52:27.310532 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 03:52:27.310549 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 03:52:27.310568 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 03:52:27.310585 | orchestrator | 2026-02-16 03:52:27.310605 | orchestrator | 2026-02-16 03:52:27.310624 | orchestrator | 2026-02-16 03:52:27.310642 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 03:52:27.310662 | orchestrator | Monday 16 February 2026 03:52:27 +0000 (0:00:11.275) 0:01:17.748 ******* 2026-02-16 03:52:27.310674 | orchestrator | =============================================================================== 2026-02-16 03:52:27.310685 | orchestrator | Create admin user ------------------------------------------------------ 42.55s 2026-02-16 03:52:27.310722 | orchestrator | Restart ceph manager service ------------------------------------------- 24.31s 2026-02-16 03:52:27.310742 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.09s 2026-02-16 03:52:27.310762 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.93s 2026-02-16 03:52:27.310780 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.18s 2026-02-16 03:52:27.310799 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.13s 2026-02-16 03:52:27.310818 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.04s 2026-02-16 03:52:27.310836 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.04s 2026-02-16 03:52:27.310851 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.03s 2026-02-16 03:52:27.310881 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.02s 2026-02-16 03:52:27.310892 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.16s 2026-02-16 03:52:27.503631 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-02-16 03:52:29.281134 | orchestrator | 2026-02-16 03:52:29 | INFO  | Task d4fdccee-5485-4782-a6ab-4b15f3aec614 (keystone) was prepared for execution. 2026-02-16 03:52:29.281211 | orchestrator | 2026-02-16 03:52:29 | INFO  | It takes a moment until task d4fdccee-5485-4782-a6ab-4b15f3aec614 (keystone) has been started and output is visible here. 2026-02-16 03:52:35.533152 | orchestrator | 2026-02-16 03:52:35.533261 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 03:52:35.533279 | orchestrator | 2026-02-16 03:52:35.533290 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 03:52:35.533300 | orchestrator | Monday 16 February 2026 03:52:32 +0000 (0:00:00.229) 0:00:00.229 ******* 2026-02-16 03:52:35.533310 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:52:35.533321 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:52:35.533330 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:52:35.533340 | orchestrator | 2026-02-16 03:52:35.533350 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 03:52:35.533373 | orchestrator | Monday 16 February 2026 03:52:33 +0000 (0:00:00.261) 0:00:00.490 ******* 2026-02-16 03:52:35.533383 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-16 03:52:35.533393 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-16 03:52:35.533403 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-16 03:52:35.533413 | orchestrator | 2026-02-16 03:52:35.533423 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-02-16 03:52:35.533433 | orchestrator | 2026-02-16 03:52:35.533442 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-02-16 03:52:35.533452 | orchestrator | Monday 16 February 2026 03:52:33 +0000 (0:00:00.366) 0:00:00.857 ******* 2026-02-16 03:52:35.533462 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:52:35.533473 | orchestrator | 2026-02-16 03:52:35.533483 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-02-16 03:52:35.533492 | orchestrator | Monday 16 February 2026 03:52:34 +0000 (0:00:00.497) 0:00:01.355 ******* 2026-02-16 03:52:35.533506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:52:35.533521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:52:35.533567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:52:35.533584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-16 03:52:35.533595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-16 03:52:35.533606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-16 03:52:35.533616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-16 03:52:35.533628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-16 03:52:35.533663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-16 03:52:35.533684 | orchestrator | 2026-02-16 03:52:35.533701 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-02-16 03:52:35.533728 | orchestrator | Monday 16 February 2026 03:52:35 +0000 (0:00:01.413) 0:00:02.769 ******* 2026-02-16 03:52:40.719747 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:52:40.719884 | orchestrator | 2026-02-16 03:52:40.719912 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-02-16 03:52:40.719932 | orchestrator | Monday 16 February 2026 03:52:35 +0000 (0:00:00.219) 0:00:02.988 ******* 2026-02-16 03:52:40.719950 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:52:40.719969 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:52:40.719986 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:52:40.720001 | orchestrator | 2026-02-16 03:52:40.720012 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-02-16 03:52:40.720034 | orchestrator | Monday 16 February 2026 03:52:35 +0000 (0:00:00.267) 0:00:03.255 ******* 2026-02-16 03:52:40.720045 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 03:52:40.720058 | orchestrator | 2026-02-16 03:52:40.720075 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-16 03:52:40.720092 | orchestrator | Monday 16 February 2026 03:52:36 +0000 (0:00:00.722) 0:00:03.977 ******* 2026-02-16 03:52:40.720109 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:52:40.720125 | orchestrator | 2026-02-16 03:52:40.720135 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-02-16 03:52:40.720145 | orchestrator | Monday 16 February 2026 03:52:37 +0000 (0:00:00.496) 0:00:04.474 ******* 2026-02-16 03:52:40.720160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:52:40.720175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:52:40.720206 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:52:40.720248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-16 03:52:40.720270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-16 03:52:40.720288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-16 03:52:40.720307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-16 03:52:40.720334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-16 03:52:40.720346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-16 03:52:40.720357 | orchestrator | 2026-02-16 03:52:40.720368 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-02-16 03:52:40.720379 | orchestrator | Monday 16 February 2026 03:52:40 +0000 (0:00:03.003) 0:00:07.477 ******* 2026-02-16 03:52:40.720405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-16 03:52:41.412170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 03:52:41.412252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 03:52:41.412286 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:52:41.412302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-16 03:52:41.412315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 03:52:41.412326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 03:52:41.412336 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:52:41.412374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-16 03:52:41.412387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-02-16 03:52:41.412405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 03:52:41.412416 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:52:41.412427 | orchestrator | 2026-02-16 03:52:41.412438 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-16 03:52:41.412450 | orchestrator | Monday 16 February 2026 03:52:40 +0000 (0:00:00.489) 0:00:07.967 ******* 2026-02-16 03:52:41.412461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-16 03:52:41.412473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 03:52:41.412495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 03:52:44.516596 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:52:44.516710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-16 03:52:44.516789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 03:52:44.516805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 03:52:44.516817 | 
orchestrator | skipping: [testbed-node-1] 2026-02-16 03:52:44.516830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-16 03:52:44.516857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 03:52:44.516889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 03:52:44.516910 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:52:44.516922 | orchestrator | 2026-02-16 03:52:44.516935 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-16 03:52:44.516948 | orchestrator | Monday 16 February 2026 03:52:41 +0000 (0:00:00.691) 0:00:08.658 ******* 2026-02-16 03:52:44.516960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:52:44.516972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:52:44.516991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:52:44.517012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-16 03:52:48.718127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-16 03:52:48.718223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-02-16 03:52:48.718239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-16 03:52:48.718252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-16 03:52:48.718279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-16 
03:52:48.718302 | orchestrator | 2026-02-16 03:52:48.718324 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-16 03:52:48.718345 | orchestrator | Monday 16 February 2026 03:52:44 +0000 (0:00:03.102) 0:00:11.761 ******* 2026-02-16 03:52:48.718407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:52:48.718423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-02-16 03:52:48.718435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:52:48.718447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 03:52:48.718464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:52:48.718539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 03:52:51.855022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-16 03:52:51.855125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-16 03:52:51.855139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-16 03:52:51.855151 | orchestrator | 2026-02-16 03:52:51.855163 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-02-16 03:52:51.855174 | orchestrator | Monday 16 February 2026 03:52:48 +0000 (0:00:04.197) 0:00:15.959 ******* 2026-02-16 03:52:51.855184 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:52:51.855195 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:52:51.855204 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:52:51.855214 | orchestrator | 
2026-02-16 03:52:51.855224 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-02-16 03:52:51.855235 | orchestrator | Monday 16 February 2026 03:52:50 +0000 (0:00:01.300) 0:00:17.259 ******* 2026-02-16 03:52:51.855252 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:52:51.855269 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:52:51.855285 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:52:51.855301 | orchestrator | 2026-02-16 03:52:51.855317 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-02-16 03:52:51.855334 | orchestrator | Monday 16 February 2026 03:52:50 +0000 (0:00:00.634) 0:00:17.894 ******* 2026-02-16 03:52:51.855393 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:52:51.855405 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:52:51.855414 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:52:51.855423 | orchestrator | 2026-02-16 03:52:51.855433 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-02-16 03:52:51.855443 | orchestrator | Monday 16 February 2026 03:52:51 +0000 (0:00:00.400) 0:00:18.294 ******* 2026-02-16 03:52:51.855452 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:52:51.855462 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:52:51.855471 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:52:51.855480 | orchestrator | 2026-02-16 03:52:51.855504 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-02-16 03:52:51.855514 | orchestrator | Monday 16 February 2026 03:52:51 +0000 (0:00:00.277) 0:00:18.572 ******* 2026-02-16 03:52:51.855546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-16 03:52:51.855563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 03:52:51.855576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 03:52:51.855587 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:52:51.855600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-16 03:52:51.855635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 03:52:51.855654 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 03:52:51.855672 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:52:51.855727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-16 03:53:09.052459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 03:53:09.052635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 03:53:09.052775 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:53:09.052798 | orchestrator | 2026-02-16 03:53:09.052813 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-16 03:53:09.052849 | orchestrator | Monday 16 February 2026 03:52:51 +0000 (0:00:00.525) 0:00:19.098 ******* 2026-02-16 03:53:09.052869 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:53:09.052887 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:53:09.052899 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:53:09.052910 | orchestrator | 2026-02-16 03:53:09.052921 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-02-16 03:53:09.052932 | orchestrator | Monday 16 February 2026 03:52:52 +0000 (0:00:00.263) 0:00:19.362 ******* 2026-02-16 03:53:09.052944 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-16 03:53:09.052956 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-16 03:53:09.052968 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-16 03:53:09.052981 | orchestrator | 2026-02-16 03:53:09.052995 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-02-16 03:53:09.053007 | orchestrator | Monday 16 February 2026 03:52:53 +0000 (0:00:01.536) 0:00:20.899 ******* 2026-02-16 03:53:09.053025 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 03:53:09.053044 | orchestrator | 2026-02-16 03:53:09.053065 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-02-16 03:53:09.053103 | orchestrator | Monday 16 February 2026 03:52:54 +0000 (0:00:00.812) 0:00:21.711 ******* 2026-02-16 03:53:09.053121 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:53:09.053139 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:53:09.053158 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:53:09.053187 | orchestrator | 2026-02-16 03:53:09.053208 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-02-16 03:53:09.053227 | orchestrator | Monday 16 February 2026 03:52:54 +0000 (0:00:00.486) 0:00:22.197 ******* 2026-02-16 03:53:09.053246 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-16 03:53:09.053264 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-16 03:53:09.053283 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 03:53:09.053301 | orchestrator | 2026-02-16 03:53:09.053320 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-02-16 03:53:09.053337 | orchestrator | Monday 16 February 2026 03:52:55 +0000 (0:00:00.875) 
0:00:23.073 ******* 2026-02-16 03:53:09.053430 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:53:09.053452 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:53:09.053470 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:53:09.053482 | orchestrator | 2026-02-16 03:53:09.053493 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-02-16 03:53:09.053504 | orchestrator | Monday 16 February 2026 03:52:56 +0000 (0:00:00.386) 0:00:23.460 ******* 2026-02-16 03:53:09.053515 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-16 03:53:09.053526 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-16 03:53:09.053537 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-16 03:53:09.053548 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-16 03:53:09.053614 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-16 03:53:09.053626 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-16 03:53:09.053637 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-16 03:53:09.053649 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-16 03:53:09.053705 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-16 03:53:09.053726 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-16 03:53:09.053746 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-16 
03:53:09.053766 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-16 03:53:09.053786 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-16 03:53:09.053807 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-16 03:53:09.053826 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-16 03:53:09.053844 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-16 03:53:09.053855 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-16 03:53:09.053866 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-16 03:53:09.053877 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-16 03:53:09.053888 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-16 03:53:09.053899 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-16 03:53:09.053910 | orchestrator | 2026-02-16 03:53:09.053976 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-02-16 03:53:09.053989 | orchestrator | Monday 16 February 2026 03:53:04 +0000 (0:00:08.081) 0:00:31.542 ******* 2026-02-16 03:53:09.054000 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-16 03:53:09.054011 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-16 03:53:09.054126 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-16 03:53:09.054149 
| orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-16 03:53:09.054169 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-16 03:53:09.054189 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-16 03:53:09.054208 | orchestrator | 2026-02-16 03:53:09.054222 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-02-16 03:53:09.054233 | orchestrator | Monday 16 February 2026 03:53:06 +0000 (0:00:02.496) 0:00:34.038 ******* 2026-02-16 03:53:09.054258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:53:09.054288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:54:45.821797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-16 03:54:45.821881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-16 03:54:45.821905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-16 03:54:45.821913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-16 03:54:45.821966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-16 03:54:45.822005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-16 03:54:45.822050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-16 03:54:45.822057 | orchestrator | 2026-02-16 03:54:45.822062 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-02-16 03:54:45.822067 | orchestrator | Monday 16 February 2026 03:53:09 +0000 (0:00:02.254) 0:00:36.292 ******* 2026-02-16 03:54:45.822071 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:54:45.822076 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:54:45.822080 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:54:45.822084 | orchestrator | 2026-02-16 03:54:45.822088 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-02-16 03:54:45.822092 | orchestrator | Monday 16 February 2026 03:53:09 +0000 (0:00:00.461) 0:00:36.754 ******* 2026-02-16 03:54:45.822096 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:54:45.822099 | orchestrator | 2026-02-16 03:54:45.822103 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-02-16 03:54:45.822107 | orchestrator | Monday 16 February 2026 03:53:11 +0000 (0:00:02.273) 0:00:39.028 ******* 2026-02-16 03:54:45.822111 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:54:45.822114 | orchestrator | 2026-02-16 03:54:45.822118 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-02-16 03:54:45.822122 | orchestrator | Monday 16 February 2026 03:53:14 +0000 (0:00:02.294) 0:00:41.322 ******* 2026-02-16 03:54:45.822126 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:54:45.822130 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:54:45.822133 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:54:45.822137 | orchestrator | 2026-02-16 03:54:45.822141 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-02-16 03:54:45.822145 | orchestrator | Monday 16 February 2026 03:53:14 +0000 (0:00:00.815) 0:00:42.138 ******* 2026-02-16 03:54:45.822148 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:54:45.822152 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:54:45.822156 | orchestrator | ok: 
[testbed-node-2] 2026-02-16 03:54:45.822159 | orchestrator | 2026-02-16 03:54:45.822163 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-02-16 03:54:45.822168 | orchestrator | Monday 16 February 2026 03:53:15 +0000 (0:00:00.323) 0:00:42.461 ******* 2026-02-16 03:54:45.822176 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:54:45.822180 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:54:45.822184 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:54:45.822188 | orchestrator | 2026-02-16 03:54:45.822192 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-02-16 03:54:45.822200 | orchestrator | Monday 16 February 2026 03:53:15 +0000 (0:00:00.520) 0:00:42.981 ******* 2026-02-16 03:54:45.822204 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:54:45.822207 | orchestrator | 2026-02-16 03:54:45.822211 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-02-16 03:54:45.822215 | orchestrator | Monday 16 February 2026 03:53:30 +0000 (0:00:14.465) 0:00:57.447 ******* 2026-02-16 03:54:45.822219 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:54:45.822222 | orchestrator | 2026-02-16 03:54:45.822226 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-16 03:54:45.822230 | orchestrator | Monday 16 February 2026 03:53:40 +0000 (0:00:10.267) 0:01:07.715 ******* 2026-02-16 03:54:45.822234 | orchestrator | 2026-02-16 03:54:45.822237 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-16 03:54:45.822241 | orchestrator | Monday 16 February 2026 03:53:40 +0000 (0:00:00.063) 0:01:07.779 ******* 2026-02-16 03:54:45.822245 | orchestrator | 2026-02-16 03:54:45.822249 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-16 
03:54:45.822253 | orchestrator | Monday 16 February 2026 03:53:40 +0000 (0:00:00.063) 0:01:07.843 ******* 2026-02-16 03:54:45.822256 | orchestrator | 2026-02-16 03:54:45.822260 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-02-16 03:54:45.822264 | orchestrator | Monday 16 February 2026 03:53:40 +0000 (0:00:00.065) 0:01:07.908 ******* 2026-02-16 03:54:45.822267 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:54:45.822271 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:54:45.822275 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:54:45.822279 | orchestrator | 2026-02-16 03:54:45.822282 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-02-16 03:54:45.822286 | orchestrator | Monday 16 February 2026 03:54:27 +0000 (0:00:47.106) 0:01:55.015 ******* 2026-02-16 03:54:45.822290 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:54:45.822293 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:54:45.822297 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:54:45.822301 | orchestrator | 2026-02-16 03:54:45.822305 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-02-16 03:54:45.822308 | orchestrator | Monday 16 February 2026 03:54:33 +0000 (0:00:05.361) 0:02:00.376 ******* 2026-02-16 03:54:45.822312 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:54:45.822316 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:54:45.822319 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:54:45.822333 | orchestrator | 2026-02-16 03:54:45.822337 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-16 03:54:45.822347 | orchestrator | Monday 16 February 2026 03:54:45 +0000 (0:00:12.147) 0:02:12.523 ******* 2026-02-16 03:54:45.822355 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:55:36.502236 | orchestrator | 2026-02-16 03:55:36.502376 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-02-16 03:55:36.502405 | orchestrator | Monday 16 February 2026 03:54:45 +0000 (0:00:00.541) 0:02:13.065 ******* 2026-02-16 03:55:36.502425 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:55:36.502444 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:55:36.502463 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:55:36.502482 | orchestrator | 2026-02-16 03:55:36.502502 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-02-16 03:55:36.502522 | orchestrator | Monday 16 February 2026 03:54:46 +0000 (0:00:01.086) 0:02:14.151 ******* 2026-02-16 03:55:36.502542 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:55:36.502587 | orchestrator | 2026-02-16 03:55:36.502601 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-02-16 03:55:36.502612 | orchestrator | Monday 16 February 2026 03:54:48 +0000 (0:00:01.760) 0:02:15.912 ******* 2026-02-16 03:55:36.502669 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-02-16 03:55:36.502683 | orchestrator | 2026-02-16 03:55:36.502695 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-02-16 03:55:36.502706 | orchestrator | Monday 16 February 2026 03:55:00 +0000 (0:00:11.673) 0:02:27.585 ******* 2026-02-16 03:55:36.502717 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-02-16 03:55:36.502728 | orchestrator | 2026-02-16 03:55:36.502739 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-02-16 03:55:36.502750 | orchestrator | Monday 16 February 2026 03:55:24 +0000 (0:00:24.284) 0:02:51.870 ******* 2026-02-16 03:55:36.502761 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-02-16 03:55:36.502776 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-02-16 03:55:36.502797 | orchestrator | 2026-02-16 03:55:36.502819 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-02-16 03:55:36.502841 | orchestrator | Monday 16 February 2026 03:55:31 +0000 (0:00:06.847) 0:02:58.717 ******* 2026-02-16 03:55:36.502861 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:55:36.502875 | orchestrator | 2026-02-16 03:55:36.502888 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-02-16 03:55:36.502901 | orchestrator | Monday 16 February 2026 03:55:31 +0000 (0:00:00.163) 0:02:58.880 ******* 2026-02-16 03:55:36.502913 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:55:36.502926 | orchestrator | 2026-02-16 03:55:36.502940 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-02-16 03:55:36.502960 | orchestrator | Monday 16 February 2026 03:55:31 +0000 (0:00:00.115) 0:02:58.996 ******* 2026-02-16 03:55:36.502980 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:55:36.503000 | orchestrator | 2026-02-16 03:55:36.503019 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-02-16 03:55:36.503039 | orchestrator | Monday 16 February 2026 03:55:31 +0000 (0:00:00.120) 0:02:59.117 ******* 2026-02-16 03:55:36.503059 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:55:36.503079 | orchestrator | 2026-02-16 03:55:36.503098 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-02-16 03:55:36.503137 | orchestrator | Monday 16 February 2026 03:55:32 +0000 (0:00:00.551) 0:02:59.668 ******* 2026-02-16 03:55:36.503156 | orchestrator | ok: [testbed-node-0] 2026-02-16 
03:55:36.503176 | orchestrator | 2026-02-16 03:55:36.503195 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-16 03:55:36.503207 | orchestrator | Monday 16 February 2026 03:55:35 +0000 (0:00:03.272) 0:03:02.941 ******* 2026-02-16 03:55:36.503218 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:55:36.503228 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:55:36.503239 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:55:36.503250 | orchestrator | 2026-02-16 03:55:36.503261 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 03:55:36.503273 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-16 03:55:36.503286 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-16 03:55:36.503296 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-16 03:55:36.503307 | orchestrator | 2026-02-16 03:55:36.503318 | orchestrator | 2026-02-16 03:55:36.503329 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 03:55:36.503349 | orchestrator | Monday 16 February 2026 03:55:36 +0000 (0:00:00.449) 0:03:03.391 ******* 2026-02-16 03:55:36.503360 | orchestrator | =============================================================================== 2026-02-16 03:55:36.503371 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 47.11s 2026-02-16 03:55:36.503382 | orchestrator | service-ks-register : keystone | Creating services --------------------- 24.28s 2026-02-16 03:55:36.503393 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.47s 2026-02-16 03:55:36.503404 | orchestrator | keystone : Restart keystone container 
---------------------------------- 12.15s 2026-02-16 03:55:36.503415 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.67s 2026-02-16 03:55:36.503426 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.27s 2026-02-16 03:55:36.503436 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.08s 2026-02-16 03:55:36.503447 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.85s 2026-02-16 03:55:36.503458 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.36s 2026-02-16 03:55:36.503489 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.20s 2026-02-16 03:55:36.503500 | orchestrator | keystone : Creating default user role ----------------------------------- 3.27s 2026-02-16 03:55:36.503511 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.10s 2026-02-16 03:55:36.503522 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.00s 2026-02-16 03:55:36.503533 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.50s 2026-02-16 03:55:36.503543 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.29s 2026-02-16 03:55:36.503554 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.27s 2026-02-16 03:55:36.503565 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.25s 2026-02-16 03:55:36.503576 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.76s 2026-02-16 03:55:36.503587 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.54s 2026-02-16 03:55:36.503598 | orchestrator | keystone : Ensuring config directories exist 
---------------------------- 1.41s 2026-02-16 03:55:38.901815 | orchestrator | 2026-02-16 03:55:38 | INFO  | Task 92d337aa-47d7-458e-af45-cb4708889a23 (placement) was prepared for execution. 2026-02-16 03:55:38.901926 | orchestrator | 2026-02-16 03:55:38 | INFO  | It takes a moment until task 92d337aa-47d7-458e-af45-cb4708889a23 (placement) has been started and output is visible here. 2026-02-16 03:56:13.403845 | orchestrator | 2026-02-16 03:56:13.403952 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 03:56:13.403970 | orchestrator | 2026-02-16 03:56:13.403982 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 03:56:13.403994 | orchestrator | Monday 16 February 2026 03:55:42 +0000 (0:00:00.254) 0:00:00.254 ******* 2026-02-16 03:56:13.404005 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:56:13.404017 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:56:13.404028 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:56:13.404039 | orchestrator | 2026-02-16 03:56:13.404050 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 03:56:13.404062 | orchestrator | Monday 16 February 2026 03:55:43 +0000 (0:00:00.291) 0:00:00.546 ******* 2026-02-16 03:56:13.404074 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-02-16 03:56:13.404085 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-02-16 03:56:13.404096 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-02-16 03:56:13.404107 | orchestrator | 2026-02-16 03:56:13.404118 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-02-16 03:56:13.404129 | orchestrator | 2026-02-16 03:56:13.404140 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-16 03:56:13.404173 | orchestrator | 
Monday 16 February 2026 03:55:43 +0000 (0:00:00.435) 0:00:00.982 ******* 2026-02-16 03:56:13.404204 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:56:13.404217 | orchestrator | 2026-02-16 03:56:13.404228 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-02-16 03:56:13.404238 | orchestrator | Monday 16 February 2026 03:55:44 +0000 (0:00:00.547) 0:00:01.529 ******* 2026-02-16 03:56:13.404249 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-16 03:56:13.404260 | orchestrator | 2026-02-16 03:56:13.404271 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-02-16 03:56:13.404281 | orchestrator | Monday 16 February 2026 03:55:48 +0000 (0:00:03.904) 0:00:05.434 ******* 2026-02-16 03:56:13.404292 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-16 03:56:13.404303 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-16 03:56:13.404314 | orchestrator | 2026-02-16 03:56:13.404325 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-02-16 03:56:13.404336 | orchestrator | Monday 16 February 2026 03:55:54 +0000 (0:00:06.729) 0:00:12.164 ******* 2026-02-16 03:56:13.404347 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-02-16 03:56:13.404358 | orchestrator | 2026-02-16 03:56:13.404368 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-16 03:56:13.404379 | orchestrator | Monday 16 February 2026 03:55:58 +0000 (0:00:03.673) 0:00:15.838 ******* 2026-02-16 03:56:13.404390 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-16 03:56:13.404401 | orchestrator | changed: 
[testbed-node-0] => (item=placement -> service) 2026-02-16 03:56:13.404414 | orchestrator | 2026-02-16 03:56:13.404451 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-02-16 03:56:13.404464 | orchestrator | Monday 16 February 2026 03:56:02 +0000 (0:00:04.058) 0:00:19.896 ******* 2026-02-16 03:56:13.404477 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-16 03:56:13.404490 | orchestrator | 2026-02-16 03:56:13.404502 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-16 03:56:13.404514 | orchestrator | Monday 16 February 2026 03:56:05 +0000 (0:00:03.231) 0:00:23.128 ******* 2026-02-16 03:56:13.404526 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-16 03:56:13.404539 | orchestrator | 2026-02-16 03:56:13.404552 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-16 03:56:13.404564 | orchestrator | Monday 16 February 2026 03:56:09 +0000 (0:00:03.727) 0:00:26.855 ******* 2026-02-16 03:56:13.404577 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:56:13.404589 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:56:13.404601 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:56:13.404612 | orchestrator | 2026-02-16 03:56:13.404623 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-16 03:56:13.404634 | orchestrator | Monday 16 February 2026 03:56:09 +0000 (0:00:00.283) 0:00:27.138 ******* 2026-02-16 03:56:13.404648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-16 03:56:13.404690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-16 03:56:13.404710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-16 03:56:13.404722 | orchestrator | 2026-02-16 03:56:13.404733 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-16 03:56:13.404745 | orchestrator | Monday 16 February 2026 03:56:10 +0000 (0:00:00.838) 0:00:27.977 ******* 2026-02-16 03:56:13.404756 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:56:13.404767 | orchestrator | 2026-02-16 03:56:13.404778 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-16 03:56:13.404789 | orchestrator | Monday 16 February 2026 03:56:11 +0000 (0:00:00.317) 0:00:28.295 ******* 2026-02-16 03:56:13.404800 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:56:13.404811 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:56:13.404821 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:56:13.404832 | orchestrator | 2026-02-16 03:56:13.404843 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-16 03:56:13.404854 | orchestrator | Monday 16 February 2026 03:56:11 +0000 (0:00:00.305) 0:00:28.600 ******* 2026-02-16 03:56:13.404865 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 03:56:13.404876 | orchestrator | 2026-02-16 03:56:13.404887 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-16 03:56:13.404898 | orchestrator | Monday 16 February 2026 
03:56:11 +0000 (0:00:00.517) 0:00:29.118 ******* 2026-02-16 03:56:13.404910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-16 03:56:13.404937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-16 03:56:16.121709 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-16 03:56:16.121863 | orchestrator | 2026-02-16 03:56:16.121894 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-16 03:56:16.121915 | orchestrator | Monday 16 February 2026 03:56:13 +0000 (0:00:01.535) 0:00:30.654 ******* 2026-02-16 03:56:16.121930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-16 03:56:16.122007 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:56:16.122079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-16 03:56:16.122143 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:56:16.122164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-16 03:56:16.122183 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:56:16.122202 | orchestrator | 2026-02-16 03:56:16.122222 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-16 03:56:16.122266 | orchestrator | Monday 16 February 2026 03:56:13 +0000 (0:00:00.493) 0:00:31.148 ******* 2026-02-16 03:56:16.122296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-16 03:56:16.122310 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:56:16.122323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-16 03:56:16.122335 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:56:16.122349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-16 03:56:16.122372 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:56:16.122385 | orchestrator | 2026-02-16 03:56:16.122398 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-16 03:56:16.122410 | orchestrator | Monday 16 February 2026 03:56:14 +0000 (0:00:00.693) 0:00:31.841 ******* 2026-02-16 03:56:16.122455 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-16 03:56:16.122483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-16 03:56:22.912626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-16 03:56:22.912758 | orchestrator | 2026-02-16 03:56:22.912786 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-16 03:56:22.912805 | orchestrator | Monday 16 February 2026 03:56:16 +0000 (0:00:01.535) 0:00:33.377 ******* 2026-02-16 03:56:22.912856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-02-16 03:56:22.912936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-16 03:56:22.912959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-16 03:56:22.912976 | orchestrator | 2026-02-16 03:56:22.913047 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-16 03:56:22.913078 | orchestrator | Monday 16 February 2026 03:56:18 +0000 (0:00:02.298) 0:00:35.675 ******* 2026-02-16 03:56:22.913117 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-16 03:56:22.913132 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-16 03:56:22.913144 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-16 03:56:22.913154 | orchestrator | 2026-02-16 03:56:22.913163 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-02-16 03:56:22.913172 | orchestrator | Monday 16 February 2026 03:56:19 +0000 (0:00:01.406) 0:00:37.082 ******* 2026-02-16 03:56:22.913181 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:56:22.913191 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:56:22.913200 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:56:22.913209 | orchestrator | 2026-02-16 03:56:22.913218 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-02-16 03:56:22.913227 | orchestrator | Monday 16 February 2026 03:56:21 +0000 (0:00:01.306) 0:00:38.388 ******* 2026-02-16 03:56:22.913250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-16 03:56:22.913260 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:56:22.913269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-16 03:56:22.913278 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:56:22.913288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-16 03:56:22.913297 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:56:22.913306 | orchestrator | 2026-02-16 03:56:22.913315 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-16 03:56:22.913324 | orchestrator | Monday 16 February 2026 03:56:21 +0000 (0:00:00.747) 0:00:39.135 ******* 2026-02-16 03:56:22.913346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-16 03:56:52.084191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-16 03:56:52.084395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-16 03:56:52.084429 | orchestrator | 2026-02-16 03:56:52.084452 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-16 03:56:52.084472 | orchestrator | Monday 16 February 2026 03:56:22 +0000 (0:00:01.032) 0:00:40.168 ******* 2026-02-16 03:56:52.084492 | orchestrator | changed: [testbed-node-0] 2026-02-16 
03:56:52.084513 | orchestrator | 2026-02-16 03:56:52.084533 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-16 03:56:52.084554 | orchestrator | Monday 16 February 2026 03:56:24 +0000 (0:00:02.093) 0:00:42.262 ******* 2026-02-16 03:56:52.084575 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:56:52.084591 | orchestrator | 2026-02-16 03:56:52.084602 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-16 03:56:52.084613 | orchestrator | Monday 16 February 2026 03:56:27 +0000 (0:00:02.202) 0:00:44.465 ******* 2026-02-16 03:56:52.084624 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:56:52.084636 | orchestrator | 2026-02-16 03:56:52.084647 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-16 03:56:52.084658 | orchestrator | Monday 16 February 2026 03:56:41 +0000 (0:00:13.932) 0:00:58.397 ******* 2026-02-16 03:56:52.084668 | orchestrator | 2026-02-16 03:56:52.084680 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-16 03:56:52.084690 | orchestrator | Monday 16 February 2026 03:56:41 +0000 (0:00:00.085) 0:00:58.483 ******* 2026-02-16 03:56:52.084701 | orchestrator | 2026-02-16 03:56:52.084712 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-16 03:56:52.084723 | orchestrator | Monday 16 February 2026 03:56:41 +0000 (0:00:00.067) 0:00:58.550 ******* 2026-02-16 03:56:52.084736 | orchestrator | 2026-02-16 03:56:52.084751 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-16 03:56:52.084770 | orchestrator | Monday 16 February 2026 03:56:41 +0000 (0:00:00.069) 0:00:58.619 ******* 2026-02-16 03:56:52.084789 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:56:52.084807 | orchestrator | changed: [testbed-node-2] 2026-02-16 
03:56:52.084861 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:56:52.084884 | orchestrator | 2026-02-16 03:56:52.084938 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 03:56:52.084980 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-16 03:56:52.085002 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-16 03:56:52.085022 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-16 03:56:52.085040 | orchestrator | 2026-02-16 03:56:52.085055 | orchestrator | 2026-02-16 03:56:52.085069 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 03:56:52.085080 | orchestrator | Monday 16 February 2026 03:56:51 +0000 (0:00:10.405) 0:01:09.024 ******* 2026-02-16 03:56:52.085091 | orchestrator | =============================================================================== 2026-02-16 03:56:52.085117 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.93s 2026-02-16 03:56:52.085161 | orchestrator | placement : Restart placement-api container ---------------------------- 10.41s 2026-02-16 03:56:52.085174 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.73s 2026-02-16 03:56:52.085193 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.06s 2026-02-16 03:56:52.085221 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.90s 2026-02-16 03:56:52.085268 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.73s 2026-02-16 03:56:52.085287 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.67s 2026-02-16 03:56:52.085306 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.23s 2026-02-16 03:56:52.085323 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.30s 2026-02-16 03:56:52.085340 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.20s 2026-02-16 03:56:52.085355 | orchestrator | placement : Creating placement databases -------------------------------- 2.09s 2026-02-16 03:56:52.085370 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.54s 2026-02-16 03:56:52.085386 | orchestrator | placement : Copying over config.json files for services ----------------- 1.54s 2026-02-16 03:56:52.085403 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.41s 2026-02-16 03:56:52.085418 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.31s 2026-02-16 03:56:52.085435 | orchestrator | placement : Check placement containers ---------------------------------- 1.03s 2026-02-16 03:56:52.085452 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.84s 2026-02-16 03:56:52.085468 | orchestrator | placement : Copying over existing policy file --------------------------- 0.75s 2026-02-16 03:56:52.085483 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.69s 2026-02-16 03:56:52.085499 | orchestrator | placement : include_tasks ----------------------------------------------- 0.55s 2026-02-16 03:56:54.322501 | orchestrator | 2026-02-16 03:56:54 | INFO  | Task 6c8758ec-983e-4569-9da5-02f850edf2ea (neutron) was prepared for execution. 2026-02-16 03:56:54.322600 | orchestrator | 2026-02-16 03:56:54 | INFO  | It takes a moment until task 6c8758ec-983e-4569-9da5-02f850edf2ea (neutron) has been started and output is visible here. 
2026-02-16 03:57:42.302581 | orchestrator | 2026-02-16 03:57:42.302667 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 03:57:42.302680 | orchestrator | 2026-02-16 03:57:42.302688 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 03:57:42.302694 | orchestrator | Monday 16 February 2026 03:56:58 +0000 (0:00:00.257) 0:00:00.257 ******* 2026-02-16 03:57:42.302714 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:57:42.302720 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:57:42.302725 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:57:42.302729 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:57:42.302735 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:57:42.302756 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:57:42.302763 | orchestrator | 2026-02-16 03:57:42.302771 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 03:57:42.302779 | orchestrator | Monday 16 February 2026 03:56:59 +0000 (0:00:00.695) 0:00:00.953 ******* 2026-02-16 03:57:42.302794 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-16 03:57:42.302803 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-16 03:57:42.302810 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-16 03:57:42.302817 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-16 03:57:42.302825 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-16 03:57:42.302830 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-16 03:57:42.302835 | orchestrator | 2026-02-16 03:57:42.302840 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-16 03:57:42.302844 | orchestrator | 2026-02-16 03:57:42.302849 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-02-16 03:57:42.302854 | orchestrator | Monday 16 February 2026 03:56:59 +0000 (0:00:00.614) 0:00:01.567 ******* 2026-02-16 03:57:42.302859 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:57:42.302865 | orchestrator | 2026-02-16 03:57:42.302869 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-16 03:57:42.302874 | orchestrator | Monday 16 February 2026 03:57:01 +0000 (0:00:01.222) 0:00:02.790 ******* 2026-02-16 03:57:42.302879 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:57:42.302894 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:57:42.302899 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:57:42.302903 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:57:42.302908 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:57:42.302912 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:57:42.302917 | orchestrator | 2026-02-16 03:57:42.302921 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-16 03:57:42.302926 | orchestrator | Monday 16 February 2026 03:57:02 +0000 (0:00:01.263) 0:00:04.054 ******* 2026-02-16 03:57:42.302931 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:57:42.302935 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:57:42.302940 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:57:42.302944 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:57:42.302949 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:57:42.302953 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:57:42.302957 | orchestrator | 2026-02-16 03:57:42.302962 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-16 03:57:42.302966 | orchestrator | Monday 16 February 2026 03:57:03 +0000 (0:00:01.050) 0:00:05.104 ******* 
2026-02-16 03:57:42.302971 | orchestrator | ok: [testbed-node-0] => { 2026-02-16 03:57:42.302976 | orchestrator |  "changed": false, 2026-02-16 03:57:42.302981 | orchestrator |  "msg": "All assertions passed" 2026-02-16 03:57:42.302986 | orchestrator | } 2026-02-16 03:57:42.303038 | orchestrator | ok: [testbed-node-1] => { 2026-02-16 03:57:42.303045 | orchestrator |  "changed": false, 2026-02-16 03:57:42.303052 | orchestrator |  "msg": "All assertions passed" 2026-02-16 03:57:42.303060 | orchestrator | } 2026-02-16 03:57:42.303068 | orchestrator | ok: [testbed-node-2] => { 2026-02-16 03:57:42.303075 | orchestrator |  "changed": false, 2026-02-16 03:57:42.303082 | orchestrator |  "msg": "All assertions passed" 2026-02-16 03:57:42.303090 | orchestrator | } 2026-02-16 03:57:42.303097 | orchestrator | ok: [testbed-node-3] => { 2026-02-16 03:57:42.303104 | orchestrator |  "changed": false, 2026-02-16 03:57:42.303119 | orchestrator |  "msg": "All assertions passed" 2026-02-16 03:57:42.303127 | orchestrator | } 2026-02-16 03:57:42.303134 | orchestrator | ok: [testbed-node-4] => { 2026-02-16 03:57:42.303142 | orchestrator |  "changed": false, 2026-02-16 03:57:42.303148 | orchestrator |  "msg": "All assertions passed" 2026-02-16 03:57:42.303153 | orchestrator | } 2026-02-16 03:57:42.303159 | orchestrator | ok: [testbed-node-5] => { 2026-02-16 03:57:42.303165 | orchestrator |  "changed": false, 2026-02-16 03:57:42.303171 | orchestrator |  "msg": "All assertions passed" 2026-02-16 03:57:42.303177 | orchestrator | } 2026-02-16 03:57:42.303182 | orchestrator | 2026-02-16 03:57:42.303188 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-16 03:57:42.303194 | orchestrator | Monday 16 February 2026 03:57:04 +0000 (0:00:00.796) 0:00:05.900 ******* 2026-02-16 03:57:42.303199 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:57:42.303205 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:57:42.303211 | orchestrator 
| skipping: [testbed-node-2] 2026-02-16 03:57:42.303217 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:57:42.303222 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:57:42.303228 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:57:42.303233 | orchestrator | 2026-02-16 03:57:42.303239 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-16 03:57:42.303245 | orchestrator | Monday 16 February 2026 03:57:04 +0000 (0:00:00.566) 0:00:06.467 ******* 2026-02-16 03:57:42.303251 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-16 03:57:42.303256 | orchestrator | 2026-02-16 03:57:42.303262 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-02-16 03:57:42.303268 | orchestrator | Monday 16 February 2026 03:57:08 +0000 (0:00:03.870) 0:00:10.338 ******* 2026-02-16 03:57:42.303273 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-16 03:57:42.303280 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-16 03:57:42.303286 | orchestrator | 2026-02-16 03:57:42.303305 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-16 03:57:42.303311 | orchestrator | Monday 16 February 2026 03:57:15 +0000 (0:00:06.554) 0:00:16.893 ******* 2026-02-16 03:57:42.303317 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-16 03:57:42.303322 | orchestrator | 2026-02-16 03:57:42.303328 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-16 03:57:42.303334 | orchestrator | Monday 16 February 2026 03:57:18 +0000 (0:00:03.211) 0:00:20.105 ******* 2026-02-16 03:57:42.303340 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-16 03:57:42.303345 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-02-16 03:57:42.303351 | orchestrator | 2026-02-16 03:57:42.303357 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-16 03:57:42.303362 | orchestrator | Monday 16 February 2026 03:57:22 +0000 (0:00:03.869) 0:00:23.974 ******* 2026-02-16 03:57:42.303368 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-16 03:57:42.303374 | orchestrator | 2026-02-16 03:57:42.303380 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-16 03:57:42.303386 | orchestrator | Monday 16 February 2026 03:57:25 +0000 (0:00:03.255) 0:00:27.230 ******* 2026-02-16 03:57:42.303392 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-16 03:57:42.303397 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-16 03:57:42.303403 | orchestrator | 2026-02-16 03:57:42.303409 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-16 03:57:42.303415 | orchestrator | Monday 16 February 2026 03:57:33 +0000 (0:00:08.218) 0:00:35.448 ******* 2026-02-16 03:57:42.303421 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:57:42.303427 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:57:42.303437 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:57:42.303446 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:57:42.303455 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:57:42.303464 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:57:42.303472 | orchestrator | 2026-02-16 03:57:42.303482 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-16 03:57:42.303488 | orchestrator | Monday 16 February 2026 03:57:34 +0000 (0:00:00.774) 0:00:36.223 ******* 2026-02-16 03:57:42.303493 | orchestrator | skipping: [testbed-node-0] 2026-02-16 
03:57:42.303499 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:57:42.303509 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:57:42.303515 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:57:42.303521 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:57:42.303529 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:57:42.303538 | orchestrator | 2026-02-16 03:57:42.303546 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-16 03:57:42.303554 | orchestrator | Monday 16 February 2026 03:57:36 +0000 (0:00:02.025) 0:00:38.249 ******* 2026-02-16 03:57:42.303563 | orchestrator | ok: [testbed-node-0] 2026-02-16 03:57:42.303572 | orchestrator | ok: [testbed-node-1] 2026-02-16 03:57:42.303581 | orchestrator | ok: [testbed-node-2] 2026-02-16 03:57:42.303588 | orchestrator | ok: [testbed-node-3] 2026-02-16 03:57:42.303593 | orchestrator | ok: [testbed-node-4] 2026-02-16 03:57:42.303598 | orchestrator | ok: [testbed-node-5] 2026-02-16 03:57:42.303603 | orchestrator | 2026-02-16 03:57:42.303608 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-16 03:57:42.303613 | orchestrator | Monday 16 February 2026 03:57:37 +0000 (0:00:01.139) 0:00:39.388 ******* 2026-02-16 03:57:42.303618 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:57:42.303623 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:57:42.303628 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:57:42.303633 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:57:42.303638 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:57:42.303646 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:57:42.303653 | orchestrator | 2026-02-16 03:57:42.303662 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-16 03:57:42.303671 | orchestrator | Monday 16 February 2026 03:57:39 +0000 (0:00:02.247) 
0:00:41.636 ******* 2026-02-16 03:57:42.303683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:57:42.303700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-16 03:57:47.549356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:57:47.549461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:57:47.549471 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-16 03:57:47.549480 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-16 03:57:47.549488 | orchestrator | 2026-02-16 03:57:47.549496 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-16 03:57:47.549504 | orchestrator | Monday 16 February 2026 03:57:42 +0000 (0:00:02.406) 0:00:44.043 ******* 2026-02-16 03:57:47.549523 | orchestrator | [WARNING]: Skipped 2026-02-16 03:57:47.549531 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-16 03:57:47.549546 | orchestrator | due to this access issue: 2026-02-16 03:57:47.549553 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-16 03:57:47.549576 | orchestrator | a directory 2026-02-16 03:57:47.549583 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 03:57:47.549589 | orchestrator | 2026-02-16 03:57:47.549596 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-16 03:57:47.549602 | orchestrator | Monday 16 February 2026 03:57:43 +0000 (0:00:00.819) 0:00:44.862 ******* 2026-02-16 03:57:47.549621 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 03:57:47.549629 | orchestrator | 2026-02-16 03:57:47.549635 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-16 03:57:47.549641 | orchestrator | Monday 16 February 2026 03:57:44 +0000 (0:00:01.221) 0:00:46.084 ******* 2026-02-16 03:57:47.549648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:57:47.549658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:57:47.549665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:57:47.549672 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-16 03:57:47.549689 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-16 03:57:51.963828 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-16 03:57:51.964032 | orchestrator | 2026-02-16 03:57:51.964065 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-16 03:57:51.964086 | orchestrator | Monday 16 February 2026 03:57:47 +0000 (0:00:03.203) 0:00:49.287 ******* 2026-02-16 03:57:51.964130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:57:51.964153 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:57:51.964174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:57:51.964227 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:57:51.964249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:57:51.964269 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:57:51.964319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:57:51.964342 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:57:51.964372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:57:51.964391 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:57:51.964411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:57:51.964430 | orchestrator | skipping: [testbed-node-5] 
2026-02-16 03:57:51.964449 | orchestrator | 2026-02-16 03:57:51.964469 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-16 03:57:51.964487 | orchestrator | Monday 16 February 2026 03:57:49 +0000 (0:00:01.868) 0:00:51.156 ******* 2026-02-16 03:57:51.964507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:57:51.964542 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:57:51.964577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:57:57.294888 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:57:57.295060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:57:57.295090 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:57:57.295140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:57:57.295164 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:57:57.295175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:57:57.295209 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:57:57.295219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:57:57.295229 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:57:57.295240 | orchestrator | 2026-02-16 
03:57:57.295251 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-16 03:57:57.295262 | orchestrator | Monday 16 February 2026 03:57:51 +0000 (0:00:02.548) 0:00:53.704 ******* 2026-02-16 03:57:57.295272 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:57:57.295282 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:57:57.295292 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:57:57.295301 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:57:57.295311 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:57:57.295320 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:57:57.295330 | orchestrator | 2026-02-16 03:57:57.295340 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-16 03:57:57.295350 | orchestrator | Monday 16 February 2026 03:57:54 +0000 (0:00:02.302) 0:00:56.007 ******* 2026-02-16 03:57:57.295359 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:57:57.295369 | orchestrator | 2026-02-16 03:57:57.295379 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-16 03:57:57.295406 | orchestrator | Monday 16 February 2026 03:57:54 +0000 (0:00:00.141) 0:00:56.149 ******* 2026-02-16 03:57:57.295416 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:57:57.295426 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:57:57.295437 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:57:57.295449 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:57:57.295460 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:57:57.295470 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:57:57.295481 | orchestrator | 2026-02-16 03:57:57.295492 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-16 03:57:57.295503 | orchestrator | Monday 16 February 2026 03:57:55 +0000 (0:00:00.620) 
0:00:56.769 ******* 2026-02-16 03:57:57.295520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:57:57.295541 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:57:57.295554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 
03:57:57.295567 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:57:57.295580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:57:57.295593 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:57:57.295606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:57:57.295620 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:57:57.295641 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:58:05.207234 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:05.207364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:58:05.207416 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:05.207429 | orchestrator | 2026-02-16 03:58:05.207442 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-16 03:58:05.207454 | orchestrator | Monday 16 February 2026 03:57:57 +0000 (0:00:02.266) 0:00:59.036 ******* 2026-02-16 03:58:05.207467 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:58:05.207480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:58:05.207492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:58:05.207530 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-16 03:58:05.207551 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-16 03:58:05.207563 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-16 03:58:05.207575 | orchestrator | 2026-02-16 03:58:05.207586 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-16 03:58:05.207598 | orchestrator | Monday 16 February 2026 03:58:00 +0000 (0:00:03.043) 0:01:02.079 ******* 2026-02-16 03:58:05.207609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:58:05.207621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:58:05.207654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:58:09.648632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-16 03:58:09.648741 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-16 
03:58:09.648758 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-16 03:58:09.648771 | orchestrator | 2026-02-16 03:58:09.648785 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-16 03:58:09.648797 | orchestrator | Monday 16 February 2026 03:58:05 +0000 (0:00:04.872) 0:01:06.951 ******* 2026-02-16 03:58:09.648809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-02-16 03:58:09.648847 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:58:09.648992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:58:09.649010 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:58:09.649022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:58:09.649033 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:58:09.649044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:58:09.649056 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:09.649067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:58:09.649086 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:09.649098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:58:09.649109 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:58:09.649120 | orchestrator | 2026-02-16 03:58:09.649143 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-02-16 03:58:09.649156 | orchestrator | Monday 16 February 2026 03:58:07 +0000 (0:00:01.818) 0:01:08.769 ******* 2026-02-16 03:58:09.649169 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:09.649181 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:58:09.649194 | orchestrator | changed: [testbed-node-0] 2026-02-16 03:58:09.649207 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:09.649219 | orchestrator | changed: [testbed-node-1] 2026-02-16 03:58:09.649231 | orchestrator | changed: [testbed-node-2] 2026-02-16 03:58:09.649243 | orchestrator | 2026-02-16 03:58:09.649256 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-02-16 03:58:09.649275 | orchestrator | Monday 16 February 2026 03:58:09 +0000 (0:00:02.613) 0:01:11.383 ******* 2026-02-16 03:58:27.835016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:58:27.835139 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:27.835159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:58:27.835172 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:58:27.835184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:58:27.835218 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:27.835232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:58:27.835260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:58:27.835273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-16 03:58:27.835284 | orchestrator | 2026-02-16 03:58:27.835297 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-02-16 03:58:27.835309 | orchestrator | Monday 16 February 2026 03:58:13 +0000 (0:00:03.475) 0:01:14.858 ******* 2026-02-16 03:58:27.835320 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:58:27.835331 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:58:27.835342 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:58:27.835353 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:27.835363 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:27.835374 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:58:27.835384 | orchestrator | 2026-02-16 03:58:27.835396 | orchestrator | TASK [neutron : Copying over 
openvswitch_agent.ini] **************************** 2026-02-16 03:58:27.835407 | orchestrator | Monday 16 February 2026 03:58:15 +0000 (0:00:02.097) 0:01:16.955 ******* 2026-02-16 03:58:27.835417 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:58:27.835440 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:58:27.835451 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:58:27.835462 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:27.835472 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:27.835483 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:58:27.835493 | orchestrator | 2026-02-16 03:58:27.835504 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-02-16 03:58:27.835515 | orchestrator | Monday 16 February 2026 03:58:17 +0000 (0:00:02.129) 0:01:19.085 ******* 2026-02-16 03:58:27.835526 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:58:27.835536 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:58:27.835547 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:58:27.835559 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:27.835572 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:58:27.835584 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:27.835596 | orchestrator | 2026-02-16 03:58:27.835609 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-02-16 03:58:27.835621 | orchestrator | Monday 16 February 2026 03:58:19 +0000 (0:00:02.090) 0:01:21.175 ******* 2026-02-16 03:58:27.835634 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:58:27.835646 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:58:27.835659 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:58:27.835672 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:27.835684 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:27.835696 | orchestrator | 
skipping: [testbed-node-5] 2026-02-16 03:58:27.835708 | orchestrator | 2026-02-16 03:58:27.835721 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-02-16 03:58:27.835733 | orchestrator | Monday 16 February 2026 03:58:21 +0000 (0:00:02.032) 0:01:23.207 ******* 2026-02-16 03:58:27.835745 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:58:27.835757 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:58:27.835769 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:58:27.835782 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:27.835822 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:58:27.835834 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:27.835846 | orchestrator | 2026-02-16 03:58:27.835859 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-02-16 03:58:27.835871 | orchestrator | Monday 16 February 2026 03:58:23 +0000 (0:00:02.075) 0:01:25.282 ******* 2026-02-16 03:58:27.835883 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:58:27.835895 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:58:27.835907 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:58:27.835918 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:27.835983 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:27.835996 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:58:27.836007 | orchestrator | 2026-02-16 03:58:27.836018 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-02-16 03:58:27.836028 | orchestrator | Monday 16 February 2026 03:58:25 +0000 (0:00:02.105) 0:01:27.388 ******* 2026-02-16 03:58:27.836044 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-16 03:58:27.836056 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:58:27.836074 | orchestrator | skipping: 
[testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-16 03:58:27.836092 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:58:27.836110 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-16 03:58:27.836126 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:58:27.836142 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-16 03:58:27.836157 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:27.836186 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-16 03:58:32.195230 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:32.195347 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-16 03:58:32.195374 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:58:32.195395 | orchestrator | 2026-02-16 03:58:32.195416 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-02-16 03:58:32.195438 | orchestrator | Monday 16 February 2026 03:58:27 +0000 (0:00:02.178) 0:01:29.566 ******* 2026-02-16 03:58:32.195462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:58:32.195481 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:58:32.195493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:58:32.195504 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:58:32.195516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:58:32.195527 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:58:32.195557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:58:32.195594 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:32.195627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:58:32.195639 | orchestrator | 
skipping: [testbed-node-4] 2026-02-16 03:58:32.195651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:58:32.195662 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:58:32.195673 | orchestrator | 2026-02-16 03:58:32.195685 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-02-16 03:58:32.195702 | orchestrator | Monday 16 February 2026 03:58:29 +0000 (0:00:02.172) 0:01:31.740 ******* 2026-02-16 03:58:32.195721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:58:32.195740 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:58:32.195759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:58:32.195828 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:58:32.195865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-16 03:58:54.851512 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:58:54.851621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:58:54.851640 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:54.851653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:58:54.851665 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:54.851755 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 03:58:54.851769 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:58:54.851780 | orchestrator | 2026-02-16 03:58:54.851792 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-02-16 03:58:54.851805 | orchestrator | Monday 16 February 2026 03:58:32 +0000 (0:00:02.197) 0:01:33.937 ******* 2026-02-16 03:58:54.851816 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:58:54.851853 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:58:54.851865 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:58:54.851876 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:54.851887 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:58:54.851898 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:54.851909 | orchestrator | 2026-02-16 03:58:54.851922 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-02-16 03:58:54.851941 | orchestrator | Monday 16 February 2026 03:58:34 +0000 (0:00:02.031) 0:01:35.968 ******* 2026-02-16 03:58:54.851957 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:58:54.851984 | orchestrator | skipping: [testbed-node-0] 2026-02-16 
03:58:54.852004 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:58:54.852020 | orchestrator | changed: [testbed-node-4] 2026-02-16 03:58:54.852056 | orchestrator | changed: [testbed-node-5] 2026-02-16 03:58:54.852074 | orchestrator | changed: [testbed-node-3] 2026-02-16 03:58:54.852091 | orchestrator | 2026-02-16 03:58:54.852108 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-02-16 03:58:54.852123 | orchestrator | Monday 16 February 2026 03:58:37 +0000 (0:00:03.322) 0:01:39.291 ******* 2026-02-16 03:58:54.852139 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:58:54.852155 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:58:54.852170 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:58:54.852186 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:54.852202 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:54.852218 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:58:54.852234 | orchestrator | 2026-02-16 03:58:54.852251 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-02-16 03:58:54.852268 | orchestrator | Monday 16 February 2026 03:58:39 +0000 (0:00:02.089) 0:01:41.380 ******* 2026-02-16 03:58:54.852284 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:58:54.852302 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:58:54.852318 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:58:54.852334 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:54.852351 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:54.852368 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:58:54.852385 | orchestrator | 2026-02-16 03:58:54.852403 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-02-16 03:58:54.852447 | orchestrator | Monday 16 February 2026 03:58:41 +0000 (0:00:02.226) 0:01:43.607 ******* 2026-02-16 
03:58:54.852467 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:58:54.852485 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:58:54.852502 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:58:54.852519 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:54.852538 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:54.852556 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:58:54.852571 | orchestrator | 2026-02-16 03:58:54.852582 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-02-16 03:58:54.852593 | orchestrator | Monday 16 February 2026 03:58:44 +0000 (0:00:02.168) 0:01:45.775 ******* 2026-02-16 03:58:54.852604 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:58:54.852614 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:58:54.852625 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:58:54.852636 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:54.852646 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:54.852657 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:58:54.852668 | orchestrator | 2026-02-16 03:58:54.852712 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-02-16 03:58:54.852724 | orchestrator | Monday 16 February 2026 03:58:46 +0000 (0:00:02.032) 0:01:47.808 ******* 2026-02-16 03:58:54.852734 | orchestrator | skipping: [testbed-node-0] 2026-02-16 03:58:54.852745 | orchestrator | skipping: [testbed-node-2] 2026-02-16 03:58:54.852756 | orchestrator | skipping: [testbed-node-1] 2026-02-16 03:58:54.852767 | orchestrator | skipping: [testbed-node-3] 2026-02-16 03:58:54.852792 | orchestrator | skipping: [testbed-node-5] 2026-02-16 03:58:54.852803 | orchestrator | skipping: [testbed-node-4] 2026-02-16 03:58:54.852814 | orchestrator | 2026-02-16 03:58:54.852825 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] 
**************************
2026-02-16 03:58:54.852835 | orchestrator | Monday 16 February 2026 03:58:47 +0000 (0:00:01.695) 0:01:49.504 *******
2026-02-16 03:58:54.852846 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:58:54.852857 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:58:54.852868 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:58:54.852878 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:58:54.852889 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:58:54.852899 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:58:54.852910 | orchestrator |
2026-02-16 03:58:54.852921 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-02-16 03:58:54.852932 | orchestrator | Monday 16 February 2026 03:58:49 +0000 (0:00:01.655) 0:01:51.159 *******
2026-02-16 03:58:54.852943 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:58:54.852953 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:58:54.852964 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:58:54.852975 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:58:54.852985 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:58:54.852996 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:58:54.853007 | orchestrator |
2026-02-16 03:58:54.853018 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-02-16 03:58:54.853029 | orchestrator | Monday 16 February 2026 03:58:51 +0000 (0:00:01.902) 0:01:53.062 *******
2026-02-16 03:58:54.853040 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-16 03:58:54.853052 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:58:54.853063 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-16 03:58:54.853074 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:58:54.853085 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-16 03:58:54.853096 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:58:54.853107 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-16 03:58:54.853118 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:58:54.853128 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-16 03:58:54.853139 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:58:54.853150 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-16 03:58:54.853161 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:58:54.853172 | orchestrator |
2026-02-16 03:58:54.853183 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-02-16 03:58:54.853194 | orchestrator | Monday 16 February 2026 03:58:52 +0000 (0:00:01.644) 0:01:54.706 *******
2026-02-16 03:58:54.853215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-16 03:58:54.853236 | orchestrator | skipping: [testbed-node-0]
2026-02-16 03:58:54.853259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-16 03:58:57.006639 | orchestrator | skipping: [testbed-node-2]
2026-02-16 03:58:57.006849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-16 03:58:57.006880 | orchestrator | skipping: [testbed-node-1]
2026-02-16 03:58:57.006902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-16 03:58:57.006943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-16 03:58:57.006965 | orchestrator | skipping: [testbed-node-5]
2026-02-16 03:58:57.007004 | orchestrator | skipping: [testbed-node-4]
2026-02-16 03:58:57.007025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-16 03:58:57.007062 | orchestrator | skipping: [testbed-node-3]
2026-02-16 03:58:57.007074 | orchestrator |
2026-02-16 03:58:57.007086 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-02-16 03:58:57.007099 | orchestrator | Monday 16 February 2026 03:58:54 +0000 (0:00:01.885) 0:01:56.592 *******
2026-02-16 03:58:57.007131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-16 03:58:57.007144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-16 03:58:57.007156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-16 03:58:57.007175 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-16 03:58:57.007208 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-16 03:58:57.007230 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-16 04:01:15.579848 | orchestrator |
2026-02-16 04:01:15.579963 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-16 04:01:15.579980 | orchestrator | Monday 16 February 2026 03:58:56 +0000 (0:00:02.154) 0:01:58.747 *******
2026-02-16 04:01:15.579993 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:01:15.580005 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:01:15.580016 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:01:15.580027 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:01:15.580038 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:01:15.580049 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:01:15.580060 | orchestrator |
2026-02-16 04:01:15.580072 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-02-16 04:01:15.580083 | orchestrator | Monday 16 February 2026 03:58:57 +0000 (0:00:00.746) 0:01:59.493 *******
2026-02-16 04:01:15.580094 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:01:15.580105 | orchestrator |
2026-02-16 04:01:15.580116 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-02-16 04:01:15.580127 | orchestrator | Monday 16 February 2026 03:58:59 +0000 (0:00:02.125) 0:02:01.618 *******
2026-02-16 04:01:15.580138 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:01:15.580216 | orchestrator |
2026-02-16 04:01:15.580228 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-02-16 04:01:15.580239 | orchestrator | Monday 16 February 2026 03:59:02 +0000 (0:00:02.291) 0:02:03.910 *******
2026-02-16 04:01:15.580250 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:01:15.580261 | orchestrator |
2026-02-16 04:01:15.580272 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-16 04:01:15.580283 | orchestrator | Monday 16 February 2026 03:59:44 +0000 (0:00:42.321) 0:02:46.231 *******
2026-02-16 04:01:15.580294 | orchestrator |
2026-02-16 04:01:15.580305 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-16 04:01:15.580317 | orchestrator | Monday 16 February 2026 03:59:44 +0000 (0:00:00.068) 0:02:46.299 *******
2026-02-16 04:01:15.580328 | orchestrator |
2026-02-16 04:01:15.580339 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-16 04:01:15.580375 | orchestrator | Monday 16 February 2026 03:59:44 +0000 (0:00:00.069) 0:02:46.369 *******
2026-02-16 04:01:15.580388 | orchestrator |
2026-02-16 04:01:15.580403 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-16 04:01:15.580415 | orchestrator | Monday 16 February 2026 03:59:44 +0000 (0:00:00.069) 0:02:46.438 *******
2026-02-16 04:01:15.580428 | orchestrator |
2026-02-16 04:01:15.580441 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-16 04:01:15.580453 | orchestrator | Monday 16 February 2026 03:59:44 +0000 (0:00:00.077) 0:02:46.515 *******
2026-02-16 04:01:15.580465 | orchestrator |
2026-02-16 04:01:15.580477 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-16 04:01:15.580506 | orchestrator | Monday 16 February 2026 03:59:44 +0000 (0:00:00.068) 0:02:46.584 *******
2026-02-16 04:01:15.580519 | orchestrator |
2026-02-16 04:01:15.580532 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-02-16 04:01:15.580544 | orchestrator | Monday 16 February 2026 03:59:44 +0000 (0:00:00.069) 0:02:46.654 *******
2026-02-16 04:01:15.580557 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:01:15.580570 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:01:15.580589 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:01:15.580607 | orchestrator |
2026-02-16 04:01:15.580626 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-02-16 04:01:15.580653 | orchestrator | Monday 16 February 2026 04:00:08 +0000 (0:00:24.101) 0:03:10.756 *******
2026-02-16 04:01:15.580674 | orchestrator | changed: [testbed-node-4]
2026-02-16 04:01:15.580692 | orchestrator | changed: [testbed-node-5]
2026-02-16 04:01:15.580711 | orchestrator | changed: [testbed-node-3]
2026-02-16 04:01:15.580730 | orchestrator |
2026-02-16 04:01:15.580750 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 04:01:15.580769 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-16 04:01:15.580789 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-16 04:01:15.580801 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-16 04:01:15.580812 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-16 04:01:15.580824 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-16 04:01:15.580844 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-16 04:01:15.580862 | orchestrator |
2026-02-16 04:01:15.580879 | orchestrator |
2026-02-16 04:01:15.580897 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 04:01:15.580914 | orchestrator | Monday 16 February 2026 04:01:15 +0000 (0:01:06.207) 0:04:16.963 *******
2026-02-16 04:01:15.580931 | orchestrator | ===============================================================================
2026-02-16 04:01:15.580948 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 66.21s
2026-02-16 04:01:15.580965 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.32s
2026-02-16 04:01:15.580981 | orchestrator | neutron : Restart neutron-server container ----------------------------- 24.10s
2026-02-16 04:01:15.581022 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.22s
2026-02-16 04:01:15.581042 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.55s
2026-02-16 04:01:15.581059 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 4.87s
2026-02-16 04:01:15.581092 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.87s
2026-02-16 04:01:15.581109 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.87s
2026-02-16 04:01:15.581127 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.48s
2026-02-16 04:01:15.581170 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.32s
2026-02-16 04:01:15.581190 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.26s
2026-02-16 04:01:15.581207 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.21s
2026-02-16 04:01:15.581224 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.20s
2026-02-16 04:01:15.581243 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.04s
2026-02-16 04:01:15.581260 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.61s
2026-02-16 04:01:15.581277 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.55s
2026-02-16 04:01:15.581294 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.41s
2026-02-16 04:01:15.581310 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 2.30s
2026-02-16 04:01:15.581327 | orchestrator | neutron : Creating Neutron database user and setting permissions -------- 2.29s
2026-02-16 04:01:15.581344 | orchestrator | neutron : Copying over existing policy file ----------------------------- 2.27s
2026-02-16 04:01:17.894646 | orchestrator | 2026-02-16 04:01:17 | INFO  | Task 2d9ce16c-087a-495b-9b13-39779a51cad9 (nova) was prepared for execution.
2026-02-16 04:01:17.894749 | orchestrator | 2026-02-16 04:01:17 | INFO  | It takes a moment until task 2d9ce16c-087a-495b-9b13-39779a51cad9 (nova) has been started and output is visible here.
2026-02-16 04:03:15.488663 | orchestrator |
2026-02-16 04:03:15.488860 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-16 04:03:15.488889 | orchestrator |
2026-02-16 04:03:15.488909 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-02-16 04:03:15.488928 | orchestrator | Monday 16 February 2026 04:01:22 +0000 (0:00:00.290) 0:00:00.290 *******
2026-02-16 04:03:15.488946 | orchestrator | changed: [testbed-manager]
2026-02-16 04:03:15.488965 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:03:15.489002 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:03:15.489019 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:03:15.489037 | orchestrator | changed: [testbed-node-3]
2026-02-16 04:03:15.489053 | orchestrator | changed: [testbed-node-4]
2026-02-16 04:03:15.489070 | orchestrator | changed: [testbed-node-5]
2026-02-16 04:03:15.489087 | orchestrator |
2026-02-16 04:03:15.489103 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-16 04:03:15.489120 | orchestrator | Monday 16 February 2026 04:01:23 +0000 (0:00:00.820) 0:00:01.111 *******
2026-02-16 04:03:15.489138 | orchestrator | changed: [testbed-manager]
2026-02-16 04:03:15.489154 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:03:15.489172 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:03:15.489190 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:03:15.489207 | orchestrator | changed: [testbed-node-3]
2026-02-16 04:03:15.489224 | orchestrator | changed: [testbed-node-4]
2026-02-16 04:03:15.489241 | orchestrator | changed: [testbed-node-5]
2026-02-16 04:03:15.489258 | orchestrator |
2026-02-16 04:03:15.489276 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-16 04:03:15.489294 | orchestrator | Monday 16 February 2026 04:01:23 +0000 (0:00:00.871) 0:00:01.983 *******
2026-02-16 04:03:15.489312 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-02-16 04:03:15.489330 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-02-16 04:03:15.489347 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-02-16 04:03:15.489365 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-02-16 04:03:15.489410 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-02-16 04:03:15.489427 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-02-16 04:03:15.489445 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-02-16 04:03:15.489461 | orchestrator |
2026-02-16 04:03:15.489481 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-02-16 04:03:15.489499 | orchestrator |
2026-02-16 04:03:15.489519 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-16 04:03:15.489537 | orchestrator | Monday 16 February 2026 04:01:24 +0000 (0:00:00.719) 0:00:02.702 *******
2026-02-16 04:03:15.489555 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:03:15.489597 | orchestrator |
2026-02-16 04:03:15.489628 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-02-16 04:03:15.489644 | orchestrator | Monday 16 February 2026 04:01:25 +0000 (0:00:00.741) 0:00:03.444 *******
2026-02-16 04:03:15.489662 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-02-16 04:03:15.489680 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-02-16 04:03:15.489696 | orchestrator |
2026-02-16 04:03:15.489713 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-02-16 04:03:15.489730 | orchestrator | Monday 16 February 2026 04:01:29 +0000 (0:00:04.390) 0:00:07.834 *******
2026-02-16 04:03:15.489746 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-16 04:03:15.489785 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-16 04:03:15.489804 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:03:15.489821 | orchestrator |
2026-02-16 04:03:15.489838 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-16 04:03:15.489854 | orchestrator | Monday 16 February 2026 04:01:33 +0000 (0:00:04.165) 0:00:11.999 *******
2026-02-16 04:03:15.489871 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:03:15.489888 | orchestrator |
2026-02-16 04:03:15.489904 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-02-16 04:03:15.489921 | orchestrator | Monday 16 February 2026 04:01:34 +0000 (0:00:00.639) 0:00:12.639 *******
2026-02-16 04:03:15.489938 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:03:15.489955 | orchestrator |
2026-02-16 04:03:15.489972 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-02-16 04:03:15.489988 | orchestrator | Monday 16 February 2026 04:01:35 +0000 (0:00:01.283) 0:00:13.922 *******
2026-02-16 04:03:15.490004 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:03:15.490089 | orchestrator |
2026-02-16 04:03:15.490111 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-16 04:03:15.490130 | orchestrator | Monday 16 February 2026 04:01:38 +0000 (0:00:02.636) 0:00:16.559 *******
2026-02-16 04:03:15.490149 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:03:15.490169 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:03:15.490189 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:03:15.490207 | orchestrator |
2026-02-16 04:03:15.490228 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-16 04:03:15.490247 | orchestrator | Monday 16 February 2026 04:01:38 +0000 (0:00:00.306) 0:00:16.865 *******
2026-02-16 04:03:15.490259 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:03:15.490270 | orchestrator |
2026-02-16 04:03:15.490281 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-02-16 04:03:15.490292 | orchestrator | Monday 16 February 2026 04:02:11 +0000 (0:00:32.519) 0:00:49.385 *******
2026-02-16 04:03:15.490303 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:03:15.490313 | orchestrator |
2026-02-16 04:03:15.490324 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-16 04:03:15.490335 | orchestrator | Monday 16 February 2026 04:02:25 +0000 (0:00:14.386) 0:01:03.772 *******
2026-02-16 04:03:15.490346 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:03:15.490356 | orchestrator |
2026-02-16 04:03:15.490367 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-16 04:03:15.490389 | orchestrator | Monday 16 February 2026 04:02:37 +0000 (0:00:11.534) 0:01:15.306 *******
2026-02-16 04:03:15.490422 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:03:15.490434 | orchestrator |
2026-02-16 04:03:15.490444 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-02-16 04:03:15.490455 | orchestrator | Monday 16 February 2026 04:02:37 +0000 (0:00:00.720) 0:01:16.027 *******
2026-02-16 04:03:15.490466 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:03:15.490477 | orchestrator |
2026-02-16 04:03:15.490488 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-16 04:03:15.490506 | orchestrator | Monday 16 February 2026 04:02:38 +0000 (0:00:00.478) 0:01:16.506 *******
2026-02-16 04:03:15.490518 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:03:15.490529 | orchestrator |
2026-02-16 04:03:15.490540 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-16 04:03:15.490551 | orchestrator | Monday 16 February 2026 04:02:39 +0000 (0:00:00.721) 0:01:17.227 *******
2026-02-16 04:03:15.490561 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:03:15.490572 | orchestrator |
2026-02-16 04:03:15.490583 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-16 04:03:15.490594 | orchestrator | Monday 16 February 2026 04:02:56 +0000 (0:00:17.588) 0:01:34.816 *******
2026-02-16 04:03:15.490605 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:03:15.490615 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:03:15.490626 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:03:15.490637 | orchestrator |
2026-02-16 04:03:15.490648 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-02-16 04:03:15.490658 | orchestrator |
2026-02-16 04:03:15.490669 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-16 04:03:15.490680 | orchestrator | Monday 16 February 2026 04:02:57 +0000 (0:00:00.310) 0:01:35.126 *******
2026-02-16 04:03:15.490691 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:03:15.490701 | orchestrator |
2026-02-16 04:03:15.490712 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-02-16 04:03:15.490723 | orchestrator | Monday 16 February 2026 04:02:57 +0000 (0:00:00.754) 0:01:35.881 *******
2026-02-16 04:03:15.490734 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:03:15.490744 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:03:15.490755 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:03:15.490826 | orchestrator |
2026-02-16 04:03:15.490848 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-02-16 04:03:15.490867 | orchestrator | Monday 16 February 2026 04:02:59 +0000 (0:00:02.008) 0:01:37.890 *******
2026-02-16 04:03:15.490878 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:03:15.490889 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:03:15.490900 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:03:15.490911 | orchestrator |
2026-02-16 04:03:15.491012 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-16 04:03:15.491027 | orchestrator | Monday 16 February 2026 04:03:01 +0000 (0:00:02.087) 0:01:39.977 *******
2026-02-16 04:03:15.491037 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:03:15.491048 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:03:15.491059 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:03:15.491070 | orchestrator |
2026-02-16 04:03:15.491081 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-16 04:03:15.491092 | orchestrator | Monday 16 February 2026 04:03:02 +0000 (0:00:00.517) 0:01:40.494 *******
2026-02-16 04:03:15.491103 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-16 04:03:15.491114 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:03:15.491125 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-16 04:03:15.491135 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:03:15.491156 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-16 04:03:15.491168 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-02-16 04:03:15.491179 | orchestrator |
2026-02-16 04:03:15.491190 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-16 04:03:15.491201 | orchestrator | Monday 16 February 2026 04:03:10 +0000 (0:00:07.742) 0:01:48.236 *******
2026-02-16 04:03:15.491212 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:03:15.491223 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:03:15.491233 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:03:15.491245 | orchestrator |
2026-02-16 04:03:15.491256 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-16 04:03:15.491267 | orchestrator | Monday 16 February 2026 04:03:10 +0000 (0:00:00.337) 0:01:48.574 *******
2026-02-16 04:03:15.491278 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-16 04:03:15.491289 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:03:15.491299 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-16 04:03:15.491310 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:03:15.491321 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-16 04:03:15.491332 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:03:15.491343 | orchestrator |
2026-02-16 04:03:15.491354 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-16 04:03:15.491365 | orchestrator | Monday 16 February 2026 04:03:11 +0000 (0:00:01.059) 0:01:49.634 *******
2026-02-16 04:03:15.491375 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:03:15.491386 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:03:15.491397 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:03:15.491408 | orchestrator |
2026-02-16 04:03:15.491419 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-02-16 04:03:15.491429 | orchestrator | Monday 16 February 2026 04:03:12 +0000 (0:00:00.488) 0:01:50.122 *******
2026-02-16 04:03:15.491440 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:03:15.491451 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:03:15.491462 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:03:15.491473 | orchestrator |
2026-02-16 04:03:15.491483 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-02-16 04:03:15.491494 | orchestrator | Monday 16 February 2026 04:03:13 +0000 (0:00:01.006) 0:01:51.129 *******
2026-02-16 04:03:15.491506 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:03:15.491516 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:03:15.491537 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:04:32.959493 | orchestrator |
2026-02-16 04:04:32.959639 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-02-16 04:04:32.959652 | orchestrator | Monday 16 February 2026 04:03:15 +0000 (0:00:02.441) 0:01:53.570 *******
2026-02-16 04:04:32.959660 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:04:32.959667 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:04:32.959674 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:04:32.959681 | orchestrator |
2026-02-16 04:04:32.959702 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-16 04:04:32.959722 | orchestrator | Monday 16 February 2026 04:03:37 +0000 (0:00:21.761) 0:02:15.332 *******
2026-02-16 04:04:32.959728 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:04:32.959734 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:04:32.959741 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:04:32.959747 | orchestrator |
2026-02-16 04:04:32.959752 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-16 04:04:32.959758 | orchestrator | Monday 16 February 2026 04:03:49 +0000 (0:00:12.111) 0:02:27.443 *******
2026-02-16 04:04:32.959764 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:04:32.959770 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:04:32.959776 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:04:32.959782 | orchestrator |
2026-02-16 04:04:32.959788 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-02-16 04:04:32.959810 | orchestrator | Monday 16 February 2026 04:03:50 +0000 (0:00:01.032) 0:02:28.475 *******
2026-02-16 04:04:32.959816 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:04:32.959822 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:04:32.959829 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:04:32.959835 | orchestrator |
2026-02-16 04:04:32.959841 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-02-16 04:04:32.959846 | orchestrator | Monday 16 February 2026 04:04:02 +0000 (0:00:12.315) 0:02:40.791 *******
2026-02-16 04:04:32.959852 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:04:32.959858 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:04:32.959863 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:04:32.959869 | orchestrator |
2026-02-16 04:04:32.959875 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-16 04:04:32.959881 | orchestrator | Monday 16 February 2026 04:04:03 +0000 (0:00:01.037) 0:02:41.829 *******
2026-02-16 04:04:32.959886 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:04:32.959892 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:04:32.959898 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:04:32.959903 | orchestrator |
2026-02-16 04:04:32.959909 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-02-16 04:04:32.959914 | orchestrator |
2026-02-16 04:04:32.959920 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-16 04:04:32.959926 | orchestrator | Monday 16 February 2026 04:04:04 +0000 (0:00:00.313) 0:02:42.143 *******
2026-02-16 04:04:32.959932 |
orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 04:04:32.959940 | orchestrator | 2026-02-16 04:04:32.959946 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-02-16 04:04:32.959952 | orchestrator | Monday 16 February 2026 04:04:04 +0000 (0:00:00.721) 0:02:42.865 ******* 2026-02-16 04:04:32.959958 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-02-16 04:04:32.959964 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-02-16 04:04:32.959970 | orchestrator | 2026-02-16 04:04:32.959976 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-02-16 04:04:32.959981 | orchestrator | Monday 16 February 2026 04:04:08 +0000 (0:00:03.243) 0:02:46.109 ******* 2026-02-16 04:04:32.959987 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-02-16 04:04:32.959996 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-02-16 04:04:32.960002 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-02-16 04:04:32.960009 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-02-16 04:04:32.960016 | orchestrator | 2026-02-16 04:04:32.960022 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-02-16 04:04:32.960028 | orchestrator | Monday 16 February 2026 04:04:14 +0000 (0:00:06.316) 0:02:52.425 ******* 2026-02-16 04:04:32.960035 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-16 04:04:32.960041 | orchestrator | 2026-02-16 04:04:32.960047 | orchestrator | TASK [service-ks-register : nova | Creating 
users] ***************************** 2026-02-16 04:04:32.960053 | orchestrator | Monday 16 February 2026 04:04:17 +0000 (0:00:03.139) 0:02:55.564 ******* 2026-02-16 04:04:32.960060 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-16 04:04:32.960066 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-02-16 04:04:32.960072 | orchestrator | 2026-02-16 04:04:32.960078 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-02-16 04:04:32.960084 | orchestrator | Monday 16 February 2026 04:04:21 +0000 (0:00:03.788) 0:02:59.353 ******* 2026-02-16 04:04:32.960095 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-16 04:04:32.960101 | orchestrator | 2026-02-16 04:04:32.960107 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-02-16 04:04:32.960113 | orchestrator | Monday 16 February 2026 04:04:24 +0000 (0:00:03.090) 0:03:02.443 ******* 2026-02-16 04:04:32.960119 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-02-16 04:04:32.960125 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-02-16 04:04:32.960131 | orchestrator | 2026-02-16 04:04:32.960138 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-16 04:04:32.960159 | orchestrator | Monday 16 February 2026 04:04:31 +0000 (0:00:07.305) 0:03:09.749 ******* 2026-02-16 04:04:32.960176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-16 04:04:32.960185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-16 04:04:32.960193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-16 04:04:32.960211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-02-16 04:04:37.410982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:04:37.411081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:04:37.411096 | orchestrator | 2026-02-16 04:04:37.411110 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-02-16 04:04:37.411123 | orchestrator | Monday 16 February 2026 04:04:32 +0000 (0:00:01.293) 0:03:11.043 ******* 2026-02-16 04:04:37.411134 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:04:37.411146 | orchestrator | 2026-02-16 04:04:37.411158 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-02-16 04:04:37.411169 | orchestrator | Monday 16 February 2026 04:04:33 +0000 (0:00:00.144) 0:03:11.187 ******* 2026-02-16 04:04:37.411179 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:04:37.411190 | 
orchestrator | skipping: [testbed-node-1] 2026-02-16 04:04:37.411201 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:04:37.411211 | orchestrator | 2026-02-16 04:04:37.411222 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-02-16 04:04:37.411233 | orchestrator | Monday 16 February 2026 04:04:33 +0000 (0:00:00.297) 0:03:11.484 ******* 2026-02-16 04:04:37.411244 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 04:04:37.411254 | orchestrator | 2026-02-16 04:04:37.411265 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-02-16 04:04:37.411276 | orchestrator | Monday 16 February 2026 04:04:34 +0000 (0:00:00.690) 0:03:12.175 ******* 2026-02-16 04:04:37.411287 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:04:37.411297 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:04:37.411308 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:04:37.411319 | orchestrator | 2026-02-16 04:04:37.411329 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-16 04:04:37.411365 | orchestrator | Monday 16 February 2026 04:04:34 +0000 (0:00:00.503) 0:03:12.678 ******* 2026-02-16 04:04:37.411377 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 04:04:37.411389 | orchestrator | 2026-02-16 04:04:37.411400 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-16 04:04:37.411411 | orchestrator | Monday 16 February 2026 04:04:35 +0000 (0:00:00.546) 0:03:13.225 ******* 2026-02-16 04:04:37.411426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-16 04:04:37.411465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-16 04:04:37.411509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-16 04:04:37.411533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:04:37.411611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:04:37.411625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:04:37.411638 | orchestrator | 2026-02-16 04:04:37.411660 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-16 04:04:39.034086 | orchestrator | Monday 16 February 2026 04:04:37 +0000 (0:00:02.271) 0:03:15.497 ******* 2026-02-16 04:04:39.034184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-16 04:04:39.034200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:04:39.034228 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:04:39.034239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-16 04:04:39.034248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:04:39.034256 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:04:39.034288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-16 04:04:39.034298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:04:39.034312 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:04:39.034320 | orchestrator | 2026-02-16 04:04:39.034329 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-16 04:04:39.034351 | orchestrator | Monday 16 February 2026 04:04:38 +0000 (0:00:00.795) 0:03:16.292 
******* 2026-02-16 04:04:39.034360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-16 04:04:39.034369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:04:39.034378 | orchestrator | skipping: [testbed-node-0] 
2026-02-16 04:04:39.034406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-16 04:04:41.284991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:04:41.285096 | orchestrator | skipping: [testbed-node-1] 2026-02-16 
04:04:41.285109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-16 04:04:41.285118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:04:41.285125 | orchestrator | skipping: [testbed-node-2] 2026-02-16 
04:04:41.285133 | orchestrator | 2026-02-16 04:04:41.285141 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-16 04:04:41.285149 | orchestrator | Monday 16 February 2026 04:04:39 +0000 (0:00:00.833) 0:03:17.125 ******* 2026-02-16 04:04:41.285168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-16 04:04:41.285189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-16 04:04:41.285203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-16 04:04:41.285211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:04:41.285222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:04:41.285234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-02-16 04:04:47.190414 | orchestrator | 2026-02-16 04:04:47.190487 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-16 04:04:47.190548 | orchestrator | Monday 16 February 2026 04:04:41 +0000 (0:00:02.249) 0:03:19.374 ******* 2026-02-16 04:04:47.190558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-16 04:04:47.190565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-16 04:04:47.190580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-16 04:04:47.190596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:04:47.190606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:04:47.190611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:04:47.190615 | orchestrator | 2026-02-16 04:04:47.190619 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-16 04:04:47.190623 | orchestrator | Monday 16 February 2026 04:04:46 +0000 (0:00:05.328) 0:03:24.702 ******* 2026-02-16 04:04:47.190627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-16 04:04:47.190634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:04:47.190639 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:04:47.190649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-16 04:04:51.414154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:04:51.414231 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:04:51.414240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-16 04:04:51.414256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-02-16 04:04:51.414261 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:04:51.414265 | orchestrator | 
2026-02-16 04:04:51.414270 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-02-16 04:04:51.414275 | orchestrator | Monday 16 February 2026 04:04:47 +0000 (0:00:00.579) 0:03:25.282 *******
2026-02-16 04:04:51.414294 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:04:51.414298 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:04:51.414302 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:04:51.414305 | orchestrator | 
2026-02-16 04:04:51.414309 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2026-02-16 04:04:51.414313 | orchestrator | Monday 16 February 2026 04:04:48 +0000 (0:00:01.477) 0:03:26.760 *******
2026-02-16 04:04:51.414317 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:04:51.414321 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:04:51.414324 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:04:51.414328 | orchestrator | 
2026-02-16 04:04:51.414332 | orchestrator | TASK [nova : Check nova containers] ********************************************
2026-02-16 04:04:51.414336 | orchestrator | Monday 16 February 2026 04:04:49 +0000 (0:00:00.340) 0:03:27.100 *******
2026-02-16 04:04:51.414350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-16 04:04:51.414355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-16 04:04:51.414362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-16 04:04:51.414370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:04:51.414374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:04:51.414382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:05:31.777333 | orchestrator | 2026-02-16 04:05:31.777502 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-16 04:05:31.777526 | orchestrator | Monday 16 February 2026 04:04:51 +0000 (0:00:02.008) 0:03:29.108 ******* 2026-02-16 04:05:31.777541 | orchestrator | 2026-02-16 04:05:31.777561 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-16 04:05:31.777579 | orchestrator | Monday 16 February 2026 04:04:51 +0000 (0:00:00.133) 0:03:29.242 ******* 2026-02-16 
04:05:31.777593 | orchestrator | 
2026-02-16 04:05:31.777607 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-16 04:05:31.777623 | orchestrator | Monday 16 February 2026 04:04:51 +0000 (0:00:00.126) 0:03:29.369 *******
2026-02-16 04:05:31.777636 | orchestrator | 
2026-02-16 04:05:31.777650 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-02-16 04:05:31.777659 | orchestrator | Monday 16 February 2026 04:04:51 +0000 (0:00:00.127) 0:03:29.496 *******
2026-02-16 04:05:31.777667 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:05:31.777676 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:05:31.777684 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:05:31.777692 | orchestrator | 
2026-02-16 04:05:31.777701 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-02-16 04:05:31.777709 | orchestrator | Monday 16 February 2026 04:05:09 +0000 (0:00:17.669) 0:03:47.166 *******
2026-02-16 04:05:31.777717 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:05:31.777725 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:05:31.777756 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:05:31.777764 | orchestrator | 
2026-02-16 04:05:31.777772 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-02-16 04:05:31.777780 | orchestrator | 
2026-02-16 04:05:31.777788 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-16 04:05:31.777796 | orchestrator | Monday 16 February 2026 04:05:19 +0000 (0:00:10.196) 0:03:57.363 *******
2026-02-16 04:05:31.777805 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:05:31.777815 | orchestrator | 
2026-02-16 04:05:31.777823 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-16 04:05:31.777831 | orchestrator | Monday 16 February 2026 04:05:20 +0000 (0:00:01.206) 0:03:58.569 *******
2026-02-16 04:05:31.777838 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:05:31.777846 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:05:31.777854 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:05:31.777862 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:05:31.777874 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:05:31.777892 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:05:31.777910 | orchestrator | 
2026-02-16 04:05:31.777940 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-02-16 04:05:31.777954 | orchestrator | Monday 16 February 2026 04:05:21 +0000 (0:00:00.763) 0:03:59.333 *******
2026-02-16 04:05:31.777967 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:05:31.777980 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:05:31.777993 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:05:31.778005 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 04:05:31.778082 | orchestrator | 
2026-02-16 04:05:31.778099 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-16 04:05:31.778112 | orchestrator | Monday 16 February 2026 04:05:22 +0000 (0:00:00.822) 0:04:00.155 *******
2026-02-16 04:05:31.778126 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-02-16 04:05:31.778140 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-02-16 04:05:31.778152 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-02-16 04:05:31.778166 | orchestrator | 
2026-02-16 04:05:31.778178 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-16 04:05:31.778191 | orchestrator | Monday 16 February 2026 04:05:22 +0000 (0:00:00.856) 0:04:01.012 *******
2026-02-16 04:05:31.778204 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-02-16 04:05:31.778217 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-02-16 04:05:31.778232 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-02-16 04:05:31.778244 | orchestrator | 
2026-02-16 04:05:31.778258 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-16 04:05:31.778272 | orchestrator | Monday 16 February 2026 04:05:24 +0000 (0:00:01.192) 0:04:02.204 *******
2026-02-16 04:05:31.778284 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter) 
2026-02-16 04:05:31.778296 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:05:31.778308 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter) 
2026-02-16 04:05:31.778323 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:05:31.778338 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter) 
2026-02-16 04:05:31.778352 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:05:31.778366 | orchestrator | 
2026-02-16 04:05:31.778380 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-02-16 04:05:31.778394 | orchestrator | Monday 16 February 2026 04:05:24 +0000 (0:00:00.570) 0:04:02.775 *******
2026-02-16 04:05:31.778432 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-16 04:05:31.778446 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables) 
2026-02-16 04:05:31.778474 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-02-16 04:05:31.778488 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:05:31.778501 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 
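The module-load and sysctl tasks above amount to loading br_netfilter on the compute nodes (testbed-node-3/4/5) and enabling the two bridge-nf-call sysctls, which is commonly required so bridged instance traffic traverses ip(6)tables for Neutron security groups. A minimal sketch of the resulting on-node state, assuming the conventional modules-load.d layout (the exact file paths are not shown in the log):

```ini
; /etc/modules-load.d/br_netfilter.conf (assumed path) - reloads the module at boot
br_netfilter

; sysctl values applied by "Enable bridge-nf-call sysctl variables"
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```

The task is skipped on testbed-node-0/1/2 because those hosts run the control-plane roles, not nova-compute.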
2026-02-16 04:05:31.778514 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables) 
2026-02-16 04:05:31.778529 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-02-16 04:05:31.778542 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:05:31.778580 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables) 
2026-02-16 04:05:31.778595 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-02-16 04:05:31.778610 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:05:31.778624 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-16 04:05:31.778639 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-16 04:05:31.778654 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-16 04:05:31.778668 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-16 04:05:31.778683 | orchestrator | 
2026-02-16 04:05:31.778698 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-02-16 04:05:31.778712 | orchestrator | Monday 16 February 2026 04:05:26 +0000 (0:00:02.277) 0:04:05.052 *******
2026-02-16 04:05:31.778727 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:05:31.778742 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:05:31.778757 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:05:31.778772 | orchestrator | changed: [testbed-node-3]
2026-02-16 04:05:31.778787 | orchestrator | changed: [testbed-node-4]
2026-02-16 04:05:31.778801 | orchestrator | changed: [testbed-node-5]
2026-02-16 04:05:31.778816 | orchestrator | 
2026-02-16 04:05:31.778831 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-02-16 04:05:31.778844 | orchestrator | 
Monday 16 February 2026 04:05:28 +0000 (0:00:01.233) 0:04:06.285 ******* 2026-02-16 04:05:31.778857 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:05:31.778871 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:05:31.778885 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:05:31.778899 | orchestrator | changed: [testbed-node-3] 2026-02-16 04:05:31.778913 | orchestrator | changed: [testbed-node-5] 2026-02-16 04:05:31.778928 | orchestrator | changed: [testbed-node-4] 2026-02-16 04:05:31.778942 | orchestrator | 2026-02-16 04:05:31.778957 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-16 04:05:31.778971 | orchestrator | Monday 16 February 2026 04:05:30 +0000 (0:00:01.833) 0:04:08.119 ******* 2026-02-16 04:05:31.779024 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-16 04:05:31.779049 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-16 04:05:31.779089 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-16 04:05:33.532798 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-16 04:05:33.532919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-16 04:05:33.532939 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-16 04:05:33.532975 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-16 04:05:33.532994 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-16 04:05:33.533070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-16 04:05:33.533107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:05:33.533123 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-16 04:05:33.533145 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-16 04:05:33.533158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-16 04:05:33.533182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:05:33.533198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:05:33.533213 | orchestrator | 2026-02-16 04:05:33.533230 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-16 04:05:33.533248 | 
orchestrator | Monday 16 February 2026 04:05:32 +0000 (0:00:02.224) 0:04:10.344 ******* 2026-02-16 04:05:33.533265 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 04:05:33.533280 | orchestrator | 2026-02-16 04:05:33.533295 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-16 04:05:33.533318 | orchestrator | Monday 16 February 2026 04:05:33 +0000 (0:00:01.275) 0:04:11.620 ******* 2026-02-16 04:05:36.782676 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-16 04:05:36.782788 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-16 04:05:36.782800 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-16 04:05:36.782828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': 
'30'}}}) 2026-02-16 04:05:36.782837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-16 04:05:36.782859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-16 04:05:36.782869 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-16 04:05:36.782878 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-16 04:05:36.782890 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-16 04:05:36.782904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:05:36.782913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:05:36.782921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:05:36.782935 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-16 04:05:38.685486 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-16 04:05:38.685667 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-16 04:05:38.685729 | orchestrator | 2026-02-16 04:05:38.685763 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-16 04:05:38.685785 | orchestrator | Monday 16 February 2026 04:05:37 +0000 (0:00:03.568) 0:04:15.188 ******* 2026-02-16 04:05:38.685806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-16 04:05:38.685828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-16 04:05:38.685848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-16 04:05:38.685867 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:05:38.685911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-16 04:05:38.685953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-16 04:05:38.685976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-16 04:05:38.685995 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:05:38.686015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-16 04:05:38.686126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-16 04:05:38.686164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-16 04:05:40.390196 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:05:40.390314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-16 04:05:40.390333 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-16 04:05:40.390344 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:05:40.390355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-16 04:05:40.390367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-16 04:05:40.390423 | orchestrator | skipping: [testbed-node-1] 2026-02-16 
04:05:40.390437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-16 04:05:40.390449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-16 04:05:40.390461 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:05:40.390497 | orchestrator | 2026-02-16 04:05:40.390512 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-16 04:05:40.390524 | orchestrator | Monday 16 February 2026 04:05:38 +0000 (0:00:01.672) 0:04:16.861 ******* 2026-02-16 04:05:40.390564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-16 04:05:40.390579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-16 04:05:40.390593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-02-16 04:05:40.390605 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:05:40.390617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-16 04:05:40.390630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-16 04:05:40.390658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-16 04:05:47.568840 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:05:47.568919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-16 04:05:47.568929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-16 04:05:47.568935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-16 04:05:47.568940 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:05:47.568945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-16 04:05:47.568965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-16 04:05:47.568969 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:05:47.568982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-16 04:05:47.568989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-16 04:05:47.568993 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:05:47.568997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-16 04:05:47.569002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-16 04:05:47.569009 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:05:47.569013 | orchestrator | 2026-02-16 04:05:47.569018 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-16 04:05:47.569023 | orchestrator | Monday 16 February 2026 04:05:41 +0000 (0:00:02.336) 0:04:19.197 ******* 2026-02-16 04:05:47.569027 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:05:47.569031 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:05:47.569035 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:05:47.569039 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 04:05:47.569043 | orchestrator | 2026-02-16 04:05:47.569048 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-02-16 
04:05:47.569056 | orchestrator | Monday 16 February 2026 04:05:41 +0000 (0:00:00.863) 0:04:20.061 ******* 2026-02-16 04:05:47.569060 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-16 04:05:47.569064 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-16 04:05:47.569067 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-16 04:05:47.569073 | orchestrator | 2026-02-16 04:05:47.569079 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-02-16 04:05:47.569085 | orchestrator | Monday 16 February 2026 04:05:43 +0000 (0:00:01.081) 0:04:21.143 ******* 2026-02-16 04:05:47.569092 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-16 04:05:47.569099 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-16 04:05:47.569105 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-16 04:05:47.569111 | orchestrator | 2026-02-16 04:05:47.569117 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-02-16 04:05:47.569123 | orchestrator | Monday 16 February 2026 04:05:43 +0000 (0:00:00.885) 0:04:22.028 ******* 2026-02-16 04:05:47.569129 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:05:47.569136 | orchestrator | ok: [testbed-node-4] 2026-02-16 04:05:47.569139 | orchestrator | ok: [testbed-node-5] 2026-02-16 04:05:47.569143 | orchestrator | 2026-02-16 04:05:47.569147 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-02-16 04:05:47.569153 | orchestrator | Monday 16 February 2026 04:05:44 +0000 (0:00:00.504) 0:04:22.533 ******* 2026-02-16 04:05:47.569159 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:05:47.569165 | orchestrator | ok: [testbed-node-4] 2026-02-16 04:05:47.569172 | orchestrator | ok: [testbed-node-5] 2026-02-16 04:05:47.569178 | orchestrator | 2026-02-16 04:05:47.569184 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 
2026-02-16 04:05:47.569190 | orchestrator | Monday 16 February 2026 04:05:44 +0000 (0:00:00.508) 0:04:23.041 ******* 2026-02-16 04:05:47.569197 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-16 04:05:47.569204 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-16 04:05:47.569211 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-16 04:05:47.569218 | orchestrator | 2026-02-16 04:05:47.569224 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-02-16 04:05:47.569229 | orchestrator | Monday 16 February 2026 04:05:46 +0000 (0:00:01.375) 0:04:24.416 ******* 2026-02-16 04:05:47.569238 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-16 04:06:05.262730 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-16 04:06:05.262847 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-16 04:06:05.262864 | orchestrator | 2026-02-16 04:06:05.262877 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-02-16 04:06:05.262907 | orchestrator | Monday 16 February 2026 04:05:47 +0000 (0:00:01.240) 0:04:25.657 ******* 2026-02-16 04:06:05.262919 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-16 04:06:05.262932 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-16 04:06:05.262951 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-16 04:06:05.262968 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-02-16 04:06:05.262986 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-02-16 04:06:05.263004 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-02-16 04:06:05.263022 | orchestrator | 2026-02-16 04:06:05.263041 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-02-16 
04:06:05.263060 | orchestrator | Monday 16 February 2026 04:05:51 +0000 (0:00:03.685) 0:04:29.342 ******* 2026-02-16 04:06:05.263080 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:06:05.263097 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:06:05.263108 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:06:05.263120 | orchestrator | 2026-02-16 04:06:05.263131 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-02-16 04:06:05.263163 | orchestrator | Monday 16 February 2026 04:05:51 +0000 (0:00:00.293) 0:04:29.636 ******* 2026-02-16 04:06:05.263175 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:06:05.263185 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:06:05.263196 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:06:05.263207 | orchestrator | 2026-02-16 04:06:05.263217 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-02-16 04:06:05.263228 | orchestrator | Monday 16 February 2026 04:05:52 +0000 (0:00:00.496) 0:04:30.132 ******* 2026-02-16 04:06:05.263239 | orchestrator | changed: [testbed-node-3] 2026-02-16 04:06:05.263253 | orchestrator | changed: [testbed-node-4] 2026-02-16 04:06:05.263265 | orchestrator | changed: [testbed-node-5] 2026-02-16 04:06:05.263278 | orchestrator | 2026-02-16 04:06:05.263291 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-02-16 04:06:05.263304 | orchestrator | Monday 16 February 2026 04:05:53 +0000 (0:00:01.184) 0:04:31.317 ******* 2026-02-16 04:06:05.263348 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-16 04:06:05.263367 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-16 04:06:05.263380 | orchestrator | 
changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-16 04:06:05.263394 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-16 04:06:05.263407 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-16 04:06:05.263421 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-16 04:06:05.263434 | orchestrator | 2026-02-16 04:06:05.263447 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-02-16 04:06:05.263459 | orchestrator | Monday 16 February 2026 04:05:56 +0000 (0:00:03.181) 0:04:34.499 ******* 2026-02-16 04:06:05.263472 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-16 04:06:05.263485 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-16 04:06:05.263497 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-16 04:06:05.263510 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-16 04:06:05.263523 | orchestrator | changed: [testbed-node-3] 2026-02-16 04:06:05.263536 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-16 04:06:05.263548 | orchestrator | changed: [testbed-node-4] 2026-02-16 04:06:05.263561 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-16 04:06:05.263573 | orchestrator | changed: [testbed-node-5] 2026-02-16 04:06:05.263586 | orchestrator | 2026-02-16 04:06:05.263600 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-02-16 04:06:05.263613 | orchestrator | Monday 16 February 2026 04:05:59 +0000 (0:00:03.093) 0:04:37.592 ******* 2026-02-16 04:06:05.263625 | 
orchestrator | skipping: [testbed-node-3] 2026-02-16 04:06:05.263636 | orchestrator | 2026-02-16 04:06:05.263647 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-02-16 04:06:05.263657 | orchestrator | Monday 16 February 2026 04:05:59 +0000 (0:00:00.131) 0:04:37.723 ******* 2026-02-16 04:06:05.263668 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:06:05.263678 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:06:05.263690 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:06:05.263700 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:06:05.263711 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:06:05.263722 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:06:05.263733 | orchestrator | 2026-02-16 04:06:05.263744 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-02-16 04:06:05.263763 | orchestrator | Monday 16 February 2026 04:06:00 +0000 (0:00:00.788) 0:04:38.512 ******* 2026-02-16 04:06:05.263774 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-16 04:06:05.263785 | orchestrator | 2026-02-16 04:06:05.263795 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-02-16 04:06:05.263806 | orchestrator | Monday 16 February 2026 04:06:01 +0000 (0:00:00.671) 0:04:39.183 ******* 2026-02-16 04:06:05.263817 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:06:05.263847 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:06:05.263859 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:06:05.263869 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:06:05.263880 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:06:05.263891 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:06:05.263901 | orchestrator | 2026-02-16 04:06:05.263919 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 
2026-02-16 04:06:05.263931 | orchestrator | Monday 16 February 2026 04:06:01 +0000 (0:00:00.759) 0:04:39.942 ******* 2026-02-16 04:06:05.263960 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-16 04:06:05.263987 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-16 04:06:05.263999 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-16 04:06:05.264012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-16 04:06:05.264040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-16 04:06:11.556635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-16 04:06:11.556740 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-16 04:06:11.556759 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-16 04:06:11.556779 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-16 04:06:11.556798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-16 04:06:11.556848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-16 04:06:11.556895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-16 04:06:11.556928 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-16 04:06:11.556943 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-16 04:06:11.556955 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-16 04:06:11.556967 | orchestrator |
2026-02-16 04:06:11.556981 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-02-16 04:06:11.556993 | orchestrator | Monday 16 February 2026 04:06:05 +0000 (0:00:03.611) 0:04:43.554 *******
2026-02-16 04:06:11.557005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-16 04:06:11.557027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-16 04:06:11.557055 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-16 04:06:11.862480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-16 04:06:11.862585 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-16 04:06:11.862601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-16 04:06:11.862640 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-16 04:06:11.862668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-16 04:06:11.862700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-16 04:06:11.862714 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-16 04:06:11.862727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-16 04:06:11.862747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-16 04:06:11.862759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-16 04:06:11.862776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-16 04:06:11.862790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-16 04:06:11.862803 | orchestrator |
2026-02-16 04:06:11.862817 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-02-16 04:06:11.862837 | orchestrator | Monday 16 February 2026 04:06:11 +0000 (0:00:06.397) 0:04:49.951 *******
2026-02-16 04:06:32.105676 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:06:32.105778 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:06:32.105794 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:06:32.105806 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:06:32.105817 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:06:32.105829 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:06:32.105840 | orchestrator |
2026-02-16 04:06:32.105853 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-02-16 04:06:32.105865 | orchestrator | Monday 16 February 2026 04:06:13 +0000 (0:00:01.346) 0:04:51.298 *******
2026-02-16 04:06:32.105876 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-16 04:06:32.105888 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-16 04:06:32.105898 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-16 04:06:32.105909 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-16 04:06:32.105920 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-16 04:06:32.105956 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-16 04:06:32.105968 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-16 04:06:32.105980 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:06:32.105990 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-16 04:06:32.106001 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:06:32.106012 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-16 04:06:32.106094 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:06:32.106105 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-16 04:06:32.106117 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-16 04:06:32.106127 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-16 04:06:32.106139 | orchestrator |
2026-02-16 04:06:32.106150 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-02-16 04:06:32.106163 | orchestrator | Monday 16 February 2026 04:06:16 +0000 (0:00:03.563) 0:04:54.861 *******
2026-02-16 04:06:32.106176 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:06:32.106189 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:06:32.106202 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:06:32.106215 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:06:32.106227 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:06:32.106239 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:06:32.106252 | orchestrator |
2026-02-16 04:06:32.106314 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-02-16 04:06:32.106327 | orchestrator | Monday 16 February 2026 04:06:17 +0000 (0:00:00.543) 0:04:55.404 *******
2026-02-16 04:06:32.106340 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-16 04:06:32.106354 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-16 04:06:32.106367 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-16 04:06:32.106381 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-16 04:06:32.106394 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-16 04:06:32.106406 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-16 04:06:32.106419 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-16 04:06:32.106431 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-16 04:06:32.106458 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-16 04:06:32.106472 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-16 04:06:32.106484 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:06:32.106497 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-16 04:06:32.106510 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:06:32.106521 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-16 04:06:32.106531 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:06:32.106552 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-16 04:06:32.106563 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-16 04:06:32.106593 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-16 04:06:32.106604 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-16 04:06:32.106615 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-16 04:06:32.106626 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-16 04:06:32.106637 | orchestrator |
2026-02-16 04:06:32.106648 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-02-16 04:06:32.106659 | orchestrator | Monday 16 February 2026 04:06:22 +0000 (0:00:04.833) 0:05:00.237 *******
2026-02-16 04:06:32.106670 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-16 04:06:32.106681 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-16 04:06:32.106691 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-16 04:06:32.106702 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-16 04:06:32.106713 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-16 04:06:32.106724 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-16 04:06:32.106734 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-16 04:06:32.106745 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-16 04:06:32.106756 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-16 04:06:32.106766 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-16 04:06:32.106777 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-16 04:06:32.106788 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-16 04:06:32.106799 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-16 04:06:32.106810 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:06:32.106820 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-16 04:06:32.106831 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:06:32.106842 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-16 04:06:32.106852 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:06:32.106863 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-16 04:06:32.106874 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-16 04:06:32.106885 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-16 04:06:32.106896 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-16 04:06:32.106907 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-16 04:06:32.106917 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-16 04:06:32.106928 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-16 04:06:32.106939 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-16 04:06:32.106956 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-16 04:06:32.106967 | orchestrator |
2026-02-16 04:06:32.106978 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-02-16 04:06:32.106989 | orchestrator | Monday 16 February 2026 04:06:28 +0000 (0:00:06.483) 0:05:06.721 *******
2026-02-16 04:06:32.106999 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:06:32.107010 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:06:32.107021 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:06:32.107031 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:06:32.107056 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:06:32.107084 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:06:32.107096 | orchestrator |
2026-02-16 04:06:32.107106 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-02-16 04:06:32.107117 | orchestrator | Monday 16 February 2026 04:06:29 +0000 (0:00:00.798) 0:05:07.520 *******
2026-02-16 04:06:32.107128 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:06:32.107139 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:06:32.107150 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:06:32.107161 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:06:32.107171 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:06:32.107182 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:06:32.107193 | orchestrator |
2026-02-16 04:06:32.107204 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-02-16 04:06:32.107215 | orchestrator | Monday 16 February 2026 04:06:30 +0000 (0:00:00.607) 0:05:08.127 *******
2026-02-16 04:06:32.107225 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:06:32.107236 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:06:32.107247 | orchestrator | changed: [testbed-node-3]
2026-02-16 04:06:32.107276 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:06:32.107288 | orchestrator | changed: [testbed-node-4]
2026-02-16 04:06:32.107298 | orchestrator | changed: [testbed-node-5]
2026-02-16 04:06:32.107309 | orchestrator |
2026-02-16 04:06:32.107327 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-02-16 04:06:33.108515 | orchestrator | Monday 16 February 2026 04:06:32 +0000 (0:00:02.059) 0:05:10.186 *******
2026-02-16 04:06:33.108618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-16 04:06:33.108637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-16 04:06:33.108651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-16 04:06:33.108690 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:06:33.108716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-16 04:06:33.108729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-16 04:06:33.108760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-16 04:06:33.108772 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:06:33.108783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-16 04:06:33.108795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-16 04:06:33.108815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-16 04:06:33.108826 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:06:33.108844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-16 04:06:33.108864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-16 04:06:36.710760 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:06:36.710870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-16 04:06:36.710889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-16 04:06:36.710902 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:06:36.710936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-16 04:06:36.710949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-16 04:06:36.710960 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:06:36.710972 | orchestrator |
2026-02-16 04:06:36.710985 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-02-16 04:06:36.710997 | orchestrator | Monday 16 February 2026 04:06:33 +0000 (0:00:01.401) 0:05:11.588 *******
2026-02-16 04:06:36.711008 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-16 04:06:36.711020 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-16 04:06:36.711031 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:06:36.711042 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-16 04:06:36.711053 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-16 04:06:36.711063 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:06:36.711074 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-16 04:06:36.711098 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-16 04:06:36.711110 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:06:36.711121 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-16 04:06:36.711131 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-16 04:06:36.711142 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:06:36.711158 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
 2026-02-16 04:06:36.711175 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-16 04:06:36.711193 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:06:36.711209 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-16 04:06:36.711228 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-16 04:06:36.711314 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:06:36.711340 | orchestrator | 2026-02-16 04:06:36.711364 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-02-16 04:06:36.711385 | orchestrator | Monday 16 February 2026 04:06:34 +0000 (0:00:00.881) 0:05:12.470 ******* 2026-02-16 04:06:36.711433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-16 04:06:36.711475 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-16 04:06:36.711500 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-16 04:06:36.711524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-16 04:06:36.711558 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-16 04:06:36.711594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-16 04:07:29.831052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-16 04:07:29.831228 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-16 04:07:29.831246 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-16 04:07:29.831254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:07:29.831276 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-16 04:07:29.831286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:07:29.831310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:07:29.831325 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-16 04:07:29.831333 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-16 04:07:29.831342 | orchestrator | 2026-02-16 04:07:29.831351 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-16 04:07:29.831359 | orchestrator | Monday 16 February 2026 04:06:36 +0000 (0:00:02.620) 0:05:15.090 ******* 2026-02-16 
04:07:29.831367 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:07:29.831376 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:07:29.831383 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:07:29.831390 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:07:29.831397 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:07:29.831404 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:07:29.831412 | orchestrator | 2026-02-16 04:07:29.831419 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-16 04:07:29.831427 | orchestrator | Monday 16 February 2026 04:06:37 +0000 (0:00:00.755) 0:05:15.846 ******* 2026-02-16 04:07:29.831434 | orchestrator | 2026-02-16 04:07:29.831441 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-16 04:07:29.831449 | orchestrator | Monday 16 February 2026 04:06:37 +0000 (0:00:00.156) 0:05:16.002 ******* 2026-02-16 04:07:29.831456 | orchestrator | 2026-02-16 04:07:29.831463 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-16 04:07:29.831471 | orchestrator | Monday 16 February 2026 04:06:38 +0000 (0:00:00.144) 0:05:16.147 ******* 2026-02-16 04:07:29.831478 | orchestrator | 2026-02-16 04:07:29.831486 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-16 04:07:29.831493 | orchestrator | Monday 16 February 2026 04:06:38 +0000 (0:00:00.142) 0:05:16.290 ******* 2026-02-16 04:07:29.831501 | orchestrator | 2026-02-16 04:07:29.831513 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-16 04:07:29.831520 | orchestrator | Monday 16 February 2026 04:06:38 +0000 (0:00:00.157) 0:05:16.448 ******* 2026-02-16 04:07:29.831527 | orchestrator | 2026-02-16 04:07:29.831535 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-02-16 04:07:29.831542 | orchestrator | Monday 16 February 2026 04:06:38 +0000 (0:00:00.327) 0:05:16.776 ******* 2026-02-16 04:07:29.831554 | orchestrator | 2026-02-16 04:07:29.831562 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-02-16 04:07:29.831569 | orchestrator | Monday 16 February 2026 04:06:38 +0000 (0:00:00.157) 0:05:16.934 ******* 2026-02-16 04:07:29.831576 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:07:29.831583 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:07:29.831591 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:07:29.831598 | orchestrator | 2026-02-16 04:07:29.831605 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-02-16 04:07:29.831613 | orchestrator | Monday 16 February 2026 04:06:45 +0000 (0:00:06.773) 0:05:23.707 ******* 2026-02-16 04:07:29.831620 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:07:29.831627 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:07:29.831634 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:07:29.831641 | orchestrator | 2026-02-16 04:07:29.831649 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-02-16 04:07:29.831656 | orchestrator | Monday 16 February 2026 04:07:04 +0000 (0:00:18.978) 0:05:42.686 ******* 2026-02-16 04:07:29.831663 | orchestrator | changed: [testbed-node-3] 2026-02-16 04:07:29.831676 | orchestrator | changed: [testbed-node-4] 2026-02-16 04:07:29.831688 | orchestrator | changed: [testbed-node-5] 2026-02-16 04:07:29.831701 | orchestrator | 2026-02-16 04:07:29.831737 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-02-16 04:09:50.971180 | orchestrator | Monday 16 February 2026 04:07:29 +0000 (0:00:25.221) 0:06:07.908 ******* 2026-02-16 04:09:50.971279 | orchestrator | changed: 
[testbed-node-3] 2026-02-16 04:09:50.971289 | orchestrator | changed: [testbed-node-5] 2026-02-16 04:09:50.971295 | orchestrator | changed: [testbed-node-4] 2026-02-16 04:09:50.971301 | orchestrator | 2026-02-16 04:09:50.971314 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-02-16 04:09:50.971321 | orchestrator | Monday 16 February 2026 04:08:07 +0000 (0:00:37.465) 0:06:45.374 ******* 2026-02-16 04:09:50.971328 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-02-16 04:09:50.971336 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-02-16 04:09:50.971343 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-02-16 04:09:50.971349 | orchestrator | changed: [testbed-node-4] 2026-02-16 04:09:50.971355 | orchestrator | changed: [testbed-node-3] 2026-02-16 04:09:50.971361 | orchestrator | changed: [testbed-node-5] 2026-02-16 04:09:50.971367 | orchestrator | 2026-02-16 04:09:50.971373 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-02-16 04:09:50.971380 | orchestrator | Monday 16 February 2026 04:08:13 +0000 (0:00:06.312) 0:06:51.687 ******* 2026-02-16 04:09:50.971391 | orchestrator | changed: [testbed-node-3] 2026-02-16 04:09:50.971398 | orchestrator | changed: [testbed-node-4] 2026-02-16 04:09:50.971404 | orchestrator | changed: [testbed-node-5] 2026-02-16 04:09:50.971411 | orchestrator | 2026-02-16 04:09:50.971417 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-02-16 04:09:50.971424 | orchestrator | Monday 16 February 2026 04:08:14 +0000 (0:00:00.805) 0:06:52.493 ******* 2026-02-16 04:09:50.971431 | orchestrator | changed: [testbed-node-3] 2026-02-16 04:09:50.971442 | orchestrator | changed: [testbed-node-4] 2026-02-16 
04:09:50.971450 | orchestrator | changed: [testbed-node-5] 2026-02-16 04:09:50.971456 | orchestrator | 2026-02-16 04:09:50.971463 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-02-16 04:09:50.971470 | orchestrator | Monday 16 February 2026 04:08:44 +0000 (0:00:30.002) 0:07:22.495 ******* 2026-02-16 04:09:50.971477 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:09:50.971483 | orchestrator | 2026-02-16 04:09:50.971490 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-02-16 04:09:50.971497 | orchestrator | Monday 16 February 2026 04:08:44 +0000 (0:00:00.133) 0:07:22.629 ******* 2026-02-16 04:09:50.971527 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:09:50.971534 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:09:50.971540 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:09:50.971548 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:09:50.971554 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:09:50.971561 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-02-16 04:09:50.971570 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-16 04:09:50.971577 | orchestrator | 2026-02-16 04:09:50.971584 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-02-16 04:09:50.971591 | orchestrator | Monday 16 February 2026 04:09:06 +0000 (0:00:21.993) 0:07:44.622 ******* 2026-02-16 04:09:50.971597 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:09:50.971604 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:09:50.971611 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:09:50.971618 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:09:50.971623 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:09:50.971630 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:09:50.971637 | orchestrator | 2026-02-16 04:09:50.971645 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-02-16 04:09:50.971651 | orchestrator | Monday 16 February 2026 04:09:15 +0000 (0:00:08.728) 0:07:53.350 ******* 2026-02-16 04:09:50.971659 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:09:50.971666 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:09:50.971672 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:09:50.971679 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:09:50.971686 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:09:50.971708 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-02-16 04:09:50.971715 | orchestrator | 2026-02-16 04:09:50.971722 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-16 04:09:50.971729 | orchestrator | Monday 16 February 2026 04:09:18 +0000 (0:00:03.529) 0:07:56.880 ******* 2026-02-16 04:09:50.971736 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-16 04:09:50.971744 | 
orchestrator | 2026-02-16 04:09:50.971750 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-16 04:09:50.971758 | orchestrator | Monday 16 February 2026 04:09:31 +0000 (0:00:12.843) 0:08:09.723 ******* 2026-02-16 04:09:50.971764 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-16 04:09:50.971772 | orchestrator | 2026-02-16 04:09:50.971779 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-02-16 04:09:50.971786 | orchestrator | Monday 16 February 2026 04:09:33 +0000 (0:00:01.388) 0:08:11.111 ******* 2026-02-16 04:09:50.971793 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:09:50.971800 | orchestrator | 2026-02-16 04:09:50.971807 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-02-16 04:09:50.971814 | orchestrator | Monday 16 February 2026 04:09:34 +0000 (0:00:01.652) 0:08:12.763 ******* 2026-02-16 04:09:50.971821 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-16 04:09:50.971828 | orchestrator | 2026-02-16 04:09:50.971835 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-02-16 04:09:50.971842 | orchestrator | Monday 16 February 2026 04:09:46 +0000 (0:00:11.691) 0:08:24.455 ******* 2026-02-16 04:09:50.971906 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:09:50.971917 | orchestrator | ok: [testbed-node-4] 2026-02-16 04:09:50.971923 | orchestrator | ok: [testbed-node-5] 2026-02-16 04:09:50.971950 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:09:50.971957 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:09:50.971963 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:09:50.971970 | orchestrator | 2026-02-16 04:09:50.971976 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-02-16 04:09:50.971990 | orchestrator | 2026-02-16 
04:09:50.971997 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-02-16 04:09:50.972004 | orchestrator | Monday 16 February 2026 04:09:48 +0000 (0:00:01.743) 0:08:26.198 ******* 2026-02-16 04:09:50.972010 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:09:50.972017 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:09:50.972024 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:09:50.972030 | orchestrator | 2026-02-16 04:09:50.972037 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-02-16 04:09:50.972044 | orchestrator | 2026-02-16 04:09:50.972050 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-02-16 04:09:50.972056 | orchestrator | Monday 16 February 2026 04:09:49 +0000 (0:00:00.910) 0:08:27.108 ******* 2026-02-16 04:09:50.972062 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:09:50.972068 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:09:50.972074 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:09:50.972079 | orchestrator | 2026-02-16 04:09:50.972086 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-02-16 04:09:50.972092 | orchestrator | 2026-02-16 04:09:50.972101 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-02-16 04:09:50.972107 | orchestrator | Monday 16 February 2026 04:09:49 +0000 (0:00:00.665) 0:08:27.774 ******* 2026-02-16 04:09:50.972114 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-02-16 04:09:50.972121 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-16 04:09:50.972128 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-16 04:09:50.972135 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-02-16 04:09:50.972142 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-02-16 04:09:50.972149 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-02-16 04:09:50.972156 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:09:50.972163 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-02-16 04:09:50.972170 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-16 04:09:50.972176 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-16 04:09:50.972183 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-02-16 04:09:50.972190 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-02-16 04:09:50.972196 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-02-16 04:09:50.972203 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:09:50.972210 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-02-16 04:09:50.972217 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-16 04:09:50.972224 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-16 04:09:50.972230 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-02-16 04:09:50.972237 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-02-16 04:09:50.972243 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-02-16 04:09:50.972250 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:09:50.972257 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-02-16 04:09:50.972264 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-16 04:09:50.972270 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-16 04:09:50.972277 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-02-16 04:09:50.972283 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-02-16 04:09:50.972290 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-02-16 04:09:50.972297 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:09:50.972304 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-02-16 04:09:50.972317 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-02-16 04:09:50.972325 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-16 04:09:50.972332 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-02-16 04:09:50.972339 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-02-16 04:09:50.972346 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-02-16 04:09:50.972353 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:09:50.972360 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-02-16 04:09:50.972367 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-16 04:09:50.972373 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-16 04:09:50.972379 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-02-16 04:09:50.972386 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-02-16 04:09:50.972392 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-02-16 04:09:50.972398 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:09:50.972404 | orchestrator | 2026-02-16 04:09:50.972411 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-02-16 04:09:50.972417 | orchestrator | 2026-02-16 04:09:50.972424 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-02-16 04:09:50.972431 | orchestrator | Monday 16 February 2026 04:09:50 +0000 (0:00:01.111) 
0:08:28.886 ******* 2026-02-16 04:09:50.972437 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-02-16 04:09:50.972444 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-02-16 04:09:50.972450 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:09:50.972465 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-02-16 04:09:52.658572 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-02-16 04:09:52.658679 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:09:52.658694 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-02-16 04:09:52.658705 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-02-16 04:09:52.658717 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:09:52.658728 | orchestrator | 2026-02-16 04:09:52.658741 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-02-16 04:09:52.658753 | orchestrator | 2026-02-16 04:09:52.658765 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-02-16 04:09:52.658777 | orchestrator | Monday 16 February 2026 04:09:51 +0000 (0:00:00.472) 0:08:29.358 ******* 2026-02-16 04:09:52.658788 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:09:52.658806 | orchestrator | 2026-02-16 04:09:52.658824 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-02-16 04:09:52.658914 | orchestrator | 2026-02-16 04:09:52.658935 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-02-16 04:09:52.658953 | orchestrator | Monday 16 February 2026 04:09:51 +0000 (0:00:00.727) 0:08:30.086 ******* 2026-02-16 04:09:52.658970 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:09:52.658988 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:09:52.659007 | orchestrator | skipping: [testbed-node-2] 
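The two migration plays above were skipped on this run, but when they do execute, kolla-ansible drives `nova-manage db online_data_migrations` inside a bootstrap container. A minimal standalone sketch of that step (hypothetical playbook, not the actual kolla-ansible role; the container name, batch size, and retry counts are assumptions):

```yaml
# Hypothetical sketch of the "Run Nova API online database migrations"
# step. The real implementation lives in the kolla-ansible nova role;
# container name and --max-count value here are assumptions.
- name: Run Nova API online data migrations
  hosts: nova-api
  run_once: true
  tasks:
    - name: Apply migrations in batches until none remain
      ansible.builtin.command: >
        docker exec nova_api
        nova-manage db online_data_migrations --max-count 1000
      register: migrations
      # With --max-count, nova-manage exits 1 while migratable rows
      # remain and 0 once everything has been migrated.
      until: migrations.rc == 0
      retries: 20
      delay: 5
      failed_when: migrations.rc not in [0, 1]
      changed_when: migrations.rc == 1
```

Looping until exit code 0 matters because online data migrations are intentionally batched so they can run against a live database without long locks.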
2026-02-16 04:09:52.659025 | orchestrator | 2026-02-16 04:09:52.659044 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 04:09:52.659062 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 04:09:52.659084 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-02-16 04:09:52.659102 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-02-16 04:09:52.659158 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-02-16 04:09:52.659173 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-16 04:09:52.659186 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-16 04:09:52.659199 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-16 04:09:52.659211 | orchestrator | 2026-02-16 04:09:52.659224 | orchestrator | 2026-02-16 04:09:52.659287 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 04:09:52.659301 | orchestrator | Monday 16 February 2026 04:09:52 +0000 (0:00:00.412) 0:08:30.498 ******* 2026-02-16 04:09:52.659314 | orchestrator | =============================================================================== 2026-02-16 04:09:52.659326 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 37.47s 2026-02-16 04:09:52.659339 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.52s 2026-02-16 04:09:52.659352 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 30.00s 2026-02-16 04:09:52.659365 | orchestrator | nova-cell : 
Restart nova-ssh container --------------------------------- 25.22s 2026-02-16 04:09:52.659378 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.99s 2026-02-16 04:09:52.659391 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.76s 2026-02-16 04:09:52.659403 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 18.98s 2026-02-16 04:09:52.659420 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 17.67s 2026-02-16 04:09:52.659434 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.59s 2026-02-16 04:09:52.659447 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.39s 2026-02-16 04:09:52.659459 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.84s 2026-02-16 04:09:52.659472 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.32s 2026-02-16 04:09:52.659483 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.11s 2026-02-16 04:09:52.659494 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.69s 2026-02-16 04:09:52.659504 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.53s 2026-02-16 04:09:52.659515 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.20s 2026-02-16 04:09:52.659540 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.73s 2026-02-16 04:09:52.659551 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.74s 2026-02-16 04:09:52.659562 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.31s 2026-02-16 04:09:52.659573 | orchestrator | nova-cell : Restart 
nova-conductor container ---------------------------- 6.77s 2026-02-16 04:09:54.579185 | orchestrator | 2026-02-16 04:09:54 | INFO  | Task 96b8f5de-66f5-4c67-bfa5-0f1c29fe068d (horizon) was prepared for execution. 2026-02-16 04:09:54.579276 | orchestrator | 2026-02-16 04:09:54 | INFO  | It takes a moment until task 96b8f5de-66f5-4c67-bfa5-0f1c29fe068d (horizon) has been started and output is visible here. 2026-02-16 04:10:01.611573 | orchestrator | 2026-02-16 04:10:01.611672 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 04:10:01.611685 | orchestrator | 2026-02-16 04:10:01.611695 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 04:10:01.611704 | orchestrator | Monday 16 February 2026 04:09:58 +0000 (0:00:00.260) 0:00:00.260 ******* 2026-02-16 04:10:01.611735 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:10:01.611745 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:10:01.611754 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:10:01.611762 | orchestrator | 2026-02-16 04:10:01.611771 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 04:10:01.611780 | orchestrator | Monday 16 February 2026 04:09:58 +0000 (0:00:00.309) 0:00:00.570 ******* 2026-02-16 04:10:01.611789 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-02-16 04:10:01.611798 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-02-16 04:10:01.611807 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-02-16 04:10:01.611815 | orchestrator | 2026-02-16 04:10:01.611825 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-02-16 04:10:01.611860 | orchestrator | 2026-02-16 04:10:01.611869 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-16 04:10:01.611878 | 
orchestrator | Monday 16 February 2026 04:09:59 +0000 (0:00:00.436) 0:00:01.006 ******* 2026-02-16 04:10:01.611888 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 04:10:01.611897 | orchestrator | 2026-02-16 04:10:01.611906 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-02-16 04:10:01.611915 | orchestrator | Monday 16 February 2026 04:09:59 +0000 (0:00:00.541) 0:00:01.547 ******* 2026-02-16 04:10:01.611949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-16 04:10:01.611981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-16 04:10:01.612009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-16 04:10:01.612020 | orchestrator | 2026-02-16 04:10:01.612029 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-02-16 04:10:01.612044 | orchestrator | Monday 16 February 2026 04:10:01 +0000 (0:00:01.131) 0:00:02.679 ******* 2026-02-16 04:10:01.612053 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:10:01.612062 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:10:01.612070 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:10:01.612079 | orchestrator | 2026-02-16 04:10:01.612088 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-16 04:10:01.612097 | orchestrator | Monday 16 February 2026 04:10:01 +0000 (0:00:00.462) 0:00:03.142 ******* 2026-02-16 04:10:01.612111 | orchestrator | skipping: [testbed-node-0] => 
(item={'name': 'cloudkitty', 'enabled': False})  2026-02-16 04:10:07.534359 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-16 04:10:07.534486 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-02-16 04:10:07.534502 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-02-16 04:10:07.534514 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-02-16 04:10:07.534524 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-02-16 04:10:07.534536 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-02-16 04:10:07.534547 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-02-16 04:10:07.534558 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-16 04:10:07.534569 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-16 04:10:07.534579 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-02-16 04:10:07.534590 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-02-16 04:10:07.534601 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-02-16 04:10:07.534611 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-02-16 04:10:07.534622 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-02-16 04:10:07.534633 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-02-16 04:10:07.534643 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-16 04:10:07.534654 | 
orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-16 04:10:07.534665 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-02-16 04:10:07.534675 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-02-16 04:10:07.534686 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-02-16 04:10:07.534696 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-02-16 04:10:07.534707 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-02-16 04:10:07.534718 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-02-16 04:10:07.534730 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-02-16 04:10:07.534743 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-02-16 04:10:07.534754 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-02-16 04:10:07.534765 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-02-16 04:10:07.534815 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-02-16 04:10:07.534864 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 
2026-02-16 04:10:07.534875 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-02-16 04:10:07.534886 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-02-16 04:10:07.534898 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-02-16 04:10:07.534913 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-02-16 04:10:07.534926 | orchestrator | 2026-02-16 04:10:07.534939 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-16 04:10:07.534953 | orchestrator | Monday 16 February 2026 04:10:02 +0000 (0:00:00.730) 0:00:03.873 ******* 2026-02-16 04:10:07.534965 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:10:07.534979 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:10:07.534991 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:10:07.535004 | orchestrator | 2026-02-16 04:10:07.535016 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-16 04:10:07.535029 | orchestrator | Monday 16 February 2026 04:10:02 +0000 (0:00:00.318) 0:00:04.191 ******* 2026-02-16 04:10:07.535041 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:10:07.535054 | orchestrator | 2026-02-16 04:10:07.535083 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-16 04:10:07.535096 | orchestrator | Monday 16 February 2026 04:10:02 +0000 (0:00:00.299) 0:00:04.491 ******* 2026-02-16 04:10:07.535108 | orchestrator | skipping: [testbed-node-0] 2026-02-16 
04:10:07.535121 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:10:07.535133 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:10:07.535146 | orchestrator | 2026-02-16 04:10:07.535158 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-16 04:10:07.535170 | orchestrator | Monday 16 February 2026 04:10:03 +0000 (0:00:00.308) 0:00:04.800 ******* 2026-02-16 04:10:07.535183 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:10:07.535195 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:10:07.535208 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:10:07.535220 | orchestrator | 2026-02-16 04:10:07.535233 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-16 04:10:07.535246 | orchestrator | Monday 16 February 2026 04:10:03 +0000 (0:00:00.319) 0:00:05.119 ******* 2026-02-16 04:10:07.535259 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:10:07.535271 | orchestrator | 2026-02-16 04:10:07.535281 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-16 04:10:07.535292 | orchestrator | Monday 16 February 2026 04:10:03 +0000 (0:00:00.143) 0:00:05.262 ******* 2026-02-16 04:10:07.535303 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:10:07.535315 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:10:07.535325 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:10:07.535336 | orchestrator | 2026-02-16 04:10:07.535347 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-16 04:10:07.535358 | orchestrator | Monday 16 February 2026 04:10:03 +0000 (0:00:00.294) 0:00:05.557 ******* 2026-02-16 04:10:07.535368 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:10:07.535379 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:10:07.535389 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:10:07.535400 | orchestrator | 
2026-02-16 04:10:07.535411 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-16 04:10:07.535430 | orchestrator | Monday 16 February 2026 04:10:04 +0000 (0:00:00.533) 0:00:06.090 ******* 2026-02-16 04:10:07.535441 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:10:07.535452 | orchestrator | 2026-02-16 04:10:07.535463 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-16 04:10:07.535473 | orchestrator | Monday 16 February 2026 04:10:04 +0000 (0:00:00.142) 0:00:06.232 ******* 2026-02-16 04:10:07.535484 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:10:07.535495 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:10:07.535505 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:10:07.535516 | orchestrator | 2026-02-16 04:10:07.535527 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-16 04:10:07.535537 | orchestrator | Monday 16 February 2026 04:10:04 +0000 (0:00:00.309) 0:00:06.542 ******* 2026-02-16 04:10:07.535548 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:10:07.535559 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:10:07.535569 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:10:07.535580 | orchestrator | 2026-02-16 04:10:07.535591 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-16 04:10:07.535602 | orchestrator | Monday 16 February 2026 04:10:05 +0000 (0:00:00.315) 0:00:06.858 ******* 2026-02-16 04:10:07.535612 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:10:07.535623 | orchestrator | 2026-02-16 04:10:07.535634 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-16 04:10:07.535645 | orchestrator | Monday 16 February 2026 04:10:05 +0000 (0:00:00.129) 0:00:06.988 ******* 2026-02-16 04:10:07.535655 | orchestrator | skipping: 
[testbed-node-0] 2026-02-16 04:10:07.535666 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:10:07.535677 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:10:07.535688 | orchestrator | 2026-02-16 04:10:07.535698 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-16 04:10:07.535709 | orchestrator | Monday 16 February 2026 04:10:05 +0000 (0:00:00.502) 0:00:07.490 ******* 2026-02-16 04:10:07.535720 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:10:07.535736 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:10:07.535747 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:10:07.535758 | orchestrator | 2026-02-16 04:10:07.535768 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-16 04:10:07.535779 | orchestrator | Monday 16 February 2026 04:10:06 +0000 (0:00:00.337) 0:00:07.828 ******* 2026-02-16 04:10:07.535790 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:10:07.535801 | orchestrator | 2026-02-16 04:10:07.535811 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-16 04:10:07.535850 | orchestrator | Monday 16 February 2026 04:10:06 +0000 (0:00:00.126) 0:00:07.955 ******* 2026-02-16 04:10:07.535870 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:10:07.535889 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:10:07.535909 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:10:07.535928 | orchestrator | 2026-02-16 04:10:07.535947 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-16 04:10:07.535959 | orchestrator | Monday 16 February 2026 04:10:06 +0000 (0:00:00.283) 0:00:08.238 ******* 2026-02-16 04:10:07.535969 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:10:07.535980 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:10:07.535991 | orchestrator | ok: [testbed-node-2] 2026-02-16 
04:10:07.536002 | orchestrator | 2026-02-16 04:10:07.536012 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-16 04:10:07.536023 | orchestrator | Monday 16 February 2026 04:10:06 +0000 (0:00:00.328) 0:00:08.567 ******* 2026-02-16 04:10:07.536034 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:10:07.536045 | orchestrator | 2026-02-16 04:10:07.536055 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-16 04:10:07.536066 | orchestrator | Monday 16 February 2026 04:10:07 +0000 (0:00:00.302) 0:00:08.870 ******* 2026-02-16 04:10:07.536085 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:10:07.536096 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:10:07.536106 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:10:07.536117 | orchestrator | 2026-02-16 04:10:07.536128 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-16 04:10:07.536146 | orchestrator | Monday 16 February 2026 04:10:07 +0000 (0:00:00.304) 0:00:09.174 ******* 2026-02-16 04:10:21.388028 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:10:21.388123 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:10:21.388141 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:10:21.388159 | orchestrator | 2026-02-16 04:10:21.388174 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-16 04:10:21.388187 | orchestrator | Monday 16 February 2026 04:10:07 +0000 (0:00:00.306) 0:00:09.481 ******* 2026-02-16 04:10:21.388201 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:10:21.388215 | orchestrator | 2026-02-16 04:10:21.388227 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-16 04:10:21.388242 | orchestrator | Monday 16 February 2026 04:10:07 +0000 (0:00:00.132) 0:00:09.614 ******* 2026-02-16 04:10:21.388251 | 
orchestrator | skipping: [testbed-node-0] 2026-02-16 04:10:21.388258 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:10:21.388266 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:10:21.388273 | orchestrator | 2026-02-16 04:10:21.388280 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-16 04:10:21.388289 | orchestrator | Monday 16 February 2026 04:10:08 +0000 (0:00:00.307) 0:00:09.921 ******* 2026-02-16 04:10:21.388296 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:10:21.388303 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:10:21.388310 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:10:21.388318 | orchestrator | 2026-02-16 04:10:21.388325 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-16 04:10:21.388332 | orchestrator | Monday 16 February 2026 04:10:08 +0000 (0:00:00.495) 0:00:10.417 ******* 2026-02-16 04:10:21.388339 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:10:21.388347 | orchestrator | 2026-02-16 04:10:21.388354 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-16 04:10:21.388361 | orchestrator | Monday 16 February 2026 04:10:08 +0000 (0:00:00.149) 0:00:10.566 ******* 2026-02-16 04:10:21.388368 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:10:21.388375 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:10:21.388383 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:10:21.388390 | orchestrator | 2026-02-16 04:10:21.388397 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-16 04:10:21.388404 | orchestrator | Monday 16 February 2026 04:10:09 +0000 (0:00:00.302) 0:00:10.869 ******* 2026-02-16 04:10:21.388411 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:10:21.388418 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:10:21.388425 | orchestrator | ok: 
[testbed-node-2] 2026-02-16 04:10:21.388433 | orchestrator | 2026-02-16 04:10:21.388440 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-16 04:10:21.388447 | orchestrator | Monday 16 February 2026 04:10:09 +0000 (0:00:00.320) 0:00:11.190 ******* 2026-02-16 04:10:21.388454 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:10:21.388461 | orchestrator | 2026-02-16 04:10:21.388469 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-16 04:10:21.388476 | orchestrator | Monday 16 February 2026 04:10:09 +0000 (0:00:00.132) 0:00:11.322 ******* 2026-02-16 04:10:21.388483 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:10:21.388490 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:10:21.388497 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:10:21.388504 | orchestrator | 2026-02-16 04:10:21.388511 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-16 04:10:21.388518 | orchestrator | Monday 16 February 2026 04:10:10 +0000 (0:00:00.521) 0:00:11.844 ******* 2026-02-16 04:10:21.388526 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:10:21.388555 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:10:21.388563 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:10:21.388570 | orchestrator | 2026-02-16 04:10:21.388582 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-16 04:10:21.388603 | orchestrator | Monday 16 February 2026 04:10:10 +0000 (0:00:00.327) 0:00:12.172 ******* 2026-02-16 04:10:21.388615 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:10:21.388627 | orchestrator | 2026-02-16 04:10:21.388638 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-16 04:10:21.388650 | orchestrator | Monday 16 February 2026 04:10:10 +0000 (0:00:00.129) 0:00:12.301 ******* 
2026-02-16 04:10:21.388686 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:10:21.388699 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:10:21.388712 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:10:21.388726 | orchestrator |
2026-02-16 04:10:21.388739 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-02-16 04:10:21.388752 | orchestrator | Monday 16 February 2026 04:10:10 +0000 (0:00:00.296) 0:00:12.598 *******
2026-02-16 04:10:21.388764 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:10:21.388776 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:10:21.388789 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:10:21.388851 | orchestrator |
2026-02-16 04:10:21.388863 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-02-16 04:10:21.388875 | orchestrator | Monday 16 February 2026 04:10:12 +0000 (0:00:01.878) 0:00:14.476 *******
2026-02-16 04:10:21.388888 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-16 04:10:21.388901 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-16 04:10:21.388914 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-16 04:10:21.388926 | orchestrator |
2026-02-16 04:10:21.388939 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-02-16 04:10:21.388952 | orchestrator | Monday 16 February 2026 04:10:14 +0000 (0:00:01.899) 0:00:16.376 *******
2026-02-16 04:10:21.388965 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-16 04:10:21.388979 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-16 04:10:21.388991 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-16 04:10:21.389004 | orchestrator |
2026-02-16 04:10:21.389016 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-02-16 04:10:21.389051 | orchestrator | Monday 16 February 2026 04:10:16 +0000 (0:00:01.767) 0:00:18.143 *******
2026-02-16 04:10:21.389064 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-16 04:10:21.389077 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-16 04:10:21.389090 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-16 04:10:21.389103 | orchestrator |
2026-02-16 04:10:21.389116 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-02-16 04:10:21.389130 | orchestrator | Monday 16 February 2026 04:10:18 +0000 (0:00:01.576) 0:00:19.720 *******
2026-02-16 04:10:21.389142 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:10:21.389155 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:10:21.389167 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:10:21.389180 | orchestrator |
2026-02-16 04:10:21.389192 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-02-16 04:10:21.389206 | orchestrator | Monday 16 February 2026 04:10:18 +0000 (0:00:00.491) 0:00:20.212 *******
2026-02-16 04:10:21.389218 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:10:21.389231 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:10:21.389260 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:10:21.389272 | orchestrator |
2026-02-16 04:10:21.389285 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-16 04:10:21.389299 | orchestrator | Monday 16 February 2026 04:10:18 +0000 (0:00:00.327) 0:00:20.539 *******
2026-02-16 04:10:21.389312 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:10:21.389325 | orchestrator |
2026-02-16 04:10:21.389339 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-02-16 04:10:21.389352 | orchestrator | Monday 16 February 2026 04:10:19 +0000 (0:00:00.602) 0:00:21.142 *******
2026-02-16 04:10:21.389434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-16 04:10:21.389476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-16 04:10:22.026992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-16 04:10:22.027135 | orchestrator |
2026-02-16 04:10:22.027161 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2026-02-16 04:10:22.027182 | orchestrator | Monday 16 February 2026 04:10:21 +0000 (0:00:01.877) 0:00:23.019 *******
2026-02-16 04:10:22.027234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-16 04:10:22.027292 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:10:22.027325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-16 04:10:22.027356 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:10:22.027429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-16 04:10:24.503006 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:10:24.503127 | orchestrator |
2026-02-16 04:10:24.503143 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2026-02-16 04:10:24.503156 | orchestrator | Monday 16 February 2026 04:10:22 +0000 (0:00:00.645) 0:00:23.665 *******
2026-02-16 04:10:24.503196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-16 04:10:24.503249 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:10:24.503300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-16 04:10:24.503321 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:10:24.503339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-16 04:10:24.503377 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:10:24.503395 | orchestrator |
2026-02-16 04:10:24.503413 | orchestrator | TASK [horizon : Deploy horizon container] **************************************
2026-02-16 04:10:24.503430 | orchestrator | Monday 16 February 2026 04:10:22 +0000 (0:00:00.857) 0:00:24.523 *******
2026-02-16 04:10:24.503469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-16 04:11:12.395015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-16 04:11:12.395134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-16 04:11:12.395143 | orchestrator |
2026-02-16 04:11:12.395149 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-16 04:11:12.395154 | orchestrator | Monday 16 February 2026 04:10:24 +0000 (0:00:01.621) 0:00:26.144 *******
2026-02-16 04:11:12.395159 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:11:12.395164 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:11:12.395176 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:11:12.395180 | orchestrator |
2026-02-16 04:11:12.395185 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-16 04:11:12.395197 | orchestrator | Monday 16 February 2026 04:10:24 +0000 (0:00:00.300) 0:00:26.444 *******
2026-02-16 04:11:12.395202 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:11:12.395206 | orchestrator |
2026-02-16 04:11:12.395210 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-02-16 04:11:12.395214 | orchestrator | Monday 16 February 2026 04:10:25 +0000 (0:00:00.534) 0:00:26.979 *******
2026-02-16 04:11:12.395218 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:11:12.395222 | orchestrator |
2026-02-16 04:11:12.395226 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-02-16 04:11:12.395230 | orchestrator | Monday 16 February 2026 04:10:27 +0000 (0:00:02.195) 0:00:29.175 *******
2026-02-16 04:11:12.395234 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:11:12.395238 | orchestrator |
2026-02-16 04:11:12.395242 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-02-16 04:11:12.395246 | orchestrator | Monday 16 February 2026 04:10:30 +0000 (0:00:02.570) 0:00:31.745 *******
2026-02-16 04:11:12.395250 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:11:12.395254 | orchestrator |
2026-02-16 04:11:12.395258 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-16 04:11:12.395262 | orchestrator | Monday 16 February 2026 04:10:46 +0000 (0:00:16.191) 0:00:47.937 *******
2026-02-16 04:11:12.395266 | orchestrator |
2026-02-16 04:11:12.395270 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-16 04:11:12.395273 | orchestrator | Monday 16 February 2026 04:10:46 +0000 (0:00:00.065) 0:00:48.002 *******
2026-02-16 04:11:12.395277 | orchestrator |
2026-02-16 04:11:12.395281 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-16 04:11:12.395285 | orchestrator | Monday 16 February 2026 04:10:46 +0000 (0:00:00.069) 0:00:48.072 *******
2026-02-16 04:11:12.395289 | orchestrator |
2026-02-16 04:11:12.395293 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-02-16 04:11:12.395297 | orchestrator | Monday 16 February 2026 04:10:46 +0000 (0:00:00.072) 0:00:48.144 *******
2026-02-16 04:11:12.395301 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:11:12.395305 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:11:12.395309 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:11:12.395313 | orchestrator |
2026-02-16 04:11:12.395317 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 04:11:12.395322 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0
skipped=25  rescued=0 ignored=0 2026-02-16 04:11:12.395327 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-16 04:11:12.395331 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-16 04:11:12.395335 | orchestrator | 2026-02-16 04:11:12.395339 | orchestrator | 2026-02-16 04:11:12.395343 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 04:11:12.395347 | orchestrator | Monday 16 February 2026 04:11:12 +0000 (0:00:25.874) 0:01:14.018 ******* 2026-02-16 04:11:12.395351 | orchestrator | =============================================================================== 2026-02-16 04:11:12.395355 | orchestrator | horizon : Restart horizon container ------------------------------------ 25.87s 2026-02-16 04:11:12.395359 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.19s 2026-02-16 04:11:12.395362 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.57s 2026-02-16 04:11:12.395371 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.20s 2026-02-16 04:11:12.395375 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.90s 2026-02-16 04:11:12.395382 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.88s 2026-02-16 04:11:12.395386 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.88s 2026-02-16 04:11:12.395390 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.77s 2026-02-16 04:11:12.395394 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.62s 2026-02-16 04:11:12.395398 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.58s 
2026-02-16 04:11:12.395402 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.13s 2026-02-16 04:11:12.395406 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.86s 2026-02-16 04:11:12.395410 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2026-02-16 04:11:12.395418 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.65s 2026-02-16 04:11:12.755774 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s 2026-02-16 04:11:12.755877 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s 2026-02-16 04:11:12.755892 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s 2026-02-16 04:11:12.755904 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2026-02-16 04:11:12.755916 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s 2026-02-16 04:11:12.755928 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.50s 2026-02-16 04:11:15.090276 | orchestrator | 2026-02-16 04:11:15 | INFO  | Task 4e3fbdec-66ad-44d3-9dbc-e85ba2eccc46 (skyline) was prepared for execution. 2026-02-16 04:11:15.090381 | orchestrator | 2026-02-16 04:11:15 | INFO  | It takes a moment until task 4e3fbdec-66ad-44d3-9dbc-e85ba2eccc46 (skyline) has been started and output is visible here. 
2026-02-16 04:11:46.280824 | orchestrator | 2026-02-16 04:11:46.280942 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 04:11:46.280959 | orchestrator | 2026-02-16 04:11:46.280971 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 04:11:46.280983 | orchestrator | Monday 16 February 2026 04:11:19 +0000 (0:00:00.257) 0:00:00.257 ******* 2026-02-16 04:11:46.280994 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:11:46.281006 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:11:46.281017 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:11:46.281028 | orchestrator | 2026-02-16 04:11:46.281039 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 04:11:46.281050 | orchestrator | Monday 16 February 2026 04:11:19 +0000 (0:00:00.288) 0:00:00.545 ******* 2026-02-16 04:11:46.281061 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-02-16 04:11:46.281073 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-02-16 04:11:46.281083 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-02-16 04:11:46.281094 | orchestrator | 2026-02-16 04:11:46.281105 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-02-16 04:11:46.281116 | orchestrator | 2026-02-16 04:11:46.281127 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-16 04:11:46.281138 | orchestrator | Monday 16 February 2026 04:11:19 +0000 (0:00:00.432) 0:00:00.978 ******* 2026-02-16 04:11:46.281149 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 04:11:46.281161 | orchestrator | 2026-02-16 04:11:46.281172 | orchestrator | TASK [service-ks-register : skyline | Creating services] *********************** 
2026-02-16 04:11:46.281183 | orchestrator | Monday 16 February 2026 04:11:20 +0000 (0:00:00.516) 0:00:01.494 ******* 2026-02-16 04:11:46.281217 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel)) 2026-02-16 04:11:46.281229 | orchestrator | 2026-02-16 04:11:46.281240 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] ********************** 2026-02-16 04:11:46.281251 | orchestrator | Monday 16 February 2026 04:11:23 +0000 (0:00:03.447) 0:00:04.942 ******* 2026-02-16 04:11:46.281262 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal) 2026-02-16 04:11:46.281273 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public) 2026-02-16 04:11:46.281283 | orchestrator | 2026-02-16 04:11:46.281294 | orchestrator | TASK [service-ks-register : skyline | Creating projects] *********************** 2026-02-16 04:11:46.281305 | orchestrator | Monday 16 February 2026 04:11:30 +0000 (0:00:06.858) 0:00:11.800 ******* 2026-02-16 04:11:46.281316 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-16 04:11:46.281327 | orchestrator | 2026-02-16 04:11:46.281338 | orchestrator | TASK [service-ks-register : skyline | Creating users] ************************** 2026-02-16 04:11:46.281349 | orchestrator | Monday 16 February 2026 04:11:33 +0000 (0:00:03.225) 0:00:15.025 ******* 2026-02-16 04:11:46.281361 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-16 04:11:46.281374 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service) 2026-02-16 04:11:46.281386 | orchestrator | 2026-02-16 04:11:46.281398 | orchestrator | TASK [service-ks-register : skyline | Creating roles] ************************** 2026-02-16 04:11:46.281410 | orchestrator | Monday 16 February 2026 04:11:37 +0000 (0:00:03.950) 0:00:18.976 ******* 2026-02-16 04:11:46.281423 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-16 04:11:46.281435 | orchestrator | 2026-02-16 04:11:46.281447 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-02-16 04:11:46.281459 | orchestrator | Monday 16 February 2026 04:11:41 +0000 (0:00:03.219) 0:00:22.195 ******* 2026-02-16 04:11:46.281471 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-02-16 04:11:46.281483 | orchestrator | 2026-02-16 04:11:46.281510 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-02-16 04:11:46.281523 | orchestrator | Monday 16 February 2026 04:11:44 +0000 (0:00:03.842) 0:00:26.038 ******* 2026-02-16 04:11:46.281539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:46.281575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:46.281606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:46.281620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:46.281640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:46.281722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:50.118390 | orchestrator | 2026-02-16 04:11:50.118508 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-16 04:11:50.118564 | orchestrator | Monday 16 February 2026 04:11:46 +0000 (0:00:01.303) 0:00:27.341 ******* 2026-02-16 04:11:50.118584 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 04:11:50.118602 | orchestrator | 2026-02-16 04:11:50.118617 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-02-16 04:11:50.118632 | orchestrator | Monday 16 February 2026 04:11:46 +0000 (0:00:00.713) 0:00:28.055 ******* 2026-02-16 04:11:50.118719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:50.118744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:50.118782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:50.118826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:50.118864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:50.118886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:50.118905 | orchestrator | 2026-02-16 04:11:50.118926 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-02-16 04:11:50.118946 | orchestrator | Monday 16 February 2026 04:11:49 +0000 (0:00:02.555) 0:00:30.611 ******* 2026-02-16 04:11:50.118975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-16 04:11:50.118992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-16 04:11:50.119014 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:11:50.119038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-16 04:11:51.369970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-16 04:11:51.370140 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:11:51.370176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-16 04:11:51.370192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-16 04:11:51.370226 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:11:51.370239 | orchestrator | 2026-02-16 04:11:51.370252 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-02-16 04:11:51.370264 | orchestrator | Monday 16 February 2026 04:11:50 +0000 (0:00:00.577) 0:00:31.188 ******* 2026-02-16 04:11:51.370276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-16 04:11:51.370307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-16 04:11:51.370320 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:11:51.370337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-16 04:11:51.370349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-16 04:11:51.370368 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:11:51.370380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-16 04:11:51.370400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-16 04:11:59.697654 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:11:59.697754 | orchestrator | 2026-02-16 04:11:59.697771 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-02-16 04:11:59.697784 | orchestrator | Monday 16 February 2026 04:11:51 +0000 (0:00:01.245) 0:00:32.433 ******* 2026-02-16 04:11:59.697812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:59.697827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:59.697861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:59.697874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:59.697905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:59.697922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:59.697958 | orchestrator | 2026-02-16 04:11:59.697980 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-02-16 04:11:59.697992 | orchestrator | Monday 16 February 2026 04:11:53 +0000 (0:00:02.441) 0:00:34.875 ******* 2026-02-16 04:11:59.698003 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-16 04:11:59.698014 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-16 04:11:59.698073 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-16 04:11:59.698084 | orchestrator | 2026-02-16 04:11:59.698095 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-02-16 04:11:59.698106 | orchestrator | Monday 16 February 2026 04:11:55 +0000 (0:00:01.548) 0:00:36.423 ******* 2026-02-16 04:11:59.698117 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-16 04:11:59.698127 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-16 04:11:59.698138 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-16 04:11:59.698149 | orchestrator | 2026-02-16 04:11:59.698159 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-02-16 04:11:59.698184 | orchestrator | Monday 16 February 2026 04:11:57 +0000 (0:00:02.059) 0:00:38.483 ******* 2026-02-16 04:11:59.698198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 04:11:59.698222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 04:12:01.730266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 04:12:01.730461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 04:12:01.730513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 04:12:01.730589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 04:12:01.730613 | orchestrator | 2026-02-16 04:12:01.730668 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-02-16 04:12:01.730690 | orchestrator | Monday 16 February 2026 04:11:59 +0000 (0:00:02.286) 0:00:40.769 ******* 2026-02-16 04:12:01.730710 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:12:01.730731 | orchestrator | skipping: 
[testbed-node-1] 2026-02-16 04:12:01.730752 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:12:01.730771 | orchestrator | 2026-02-16 04:12:01.730829 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-02-16 04:12:01.730845 | orchestrator | Monday 16 February 2026 04:12:00 +0000 (0:00:00.329) 0:00:41.099 ******* 2026-02-16 04:12:01.730868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 04:12:01.730885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 04:12:01.730898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-16 04:12:01.730911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 04:12:01.730949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 04:12:39.430664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-16 04:12:39.430793 | orchestrator | 2026-02-16 04:12:39.430816 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-02-16 04:12:39.430838 | orchestrator | Monday 16 February 2026 04:12:01 +0000 (0:00:01.695) 0:00:42.794 ******* 2026-02-16 04:12:39.430857 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:12:39.430878 | orchestrator | 2026-02-16 04:12:39.430895 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-02-16 04:12:39.430913 | orchestrator | Monday 16 February 2026 04:12:03 +0000 (0:00:02.188) 0:00:44.982 ******* 2026-02-16 04:12:39.430930 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:12:39.430949 | orchestrator | 2026-02-16 04:12:39.430968 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-02-16 04:12:39.430987 | orchestrator | Monday 16 February 2026 04:12:06 +0000 (0:00:02.274) 0:00:47.257 ******* 2026-02-16 04:12:39.431007 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:12:39.431026 | orchestrator | 2026-02-16 04:12:39.431044 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-02-16 04:12:39.431064 | orchestrator | Monday 16 February 2026 04:12:14 +0000 (0:00:07.868) 0:00:55.126 ******* 2026-02-16 04:12:39.431075 | orchestrator | 2026-02-16 04:12:39.431086 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-02-16 04:12:39.431097 | orchestrator | Monday 16 February 2026 04:12:14 +0000 (0:00:00.069) 0:00:55.196 ******* 2026-02-16 04:12:39.431116 | orchestrator | 2026-02-16 04:12:39.431134 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-02-16 04:12:39.431153 | orchestrator | Monday 16 February 2026 04:12:14 +0000 (0:00:00.069) 0:00:55.265 ******* 2026-02-16 04:12:39.431172 | orchestrator | 2026-02-16 04:12:39.431192 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-02-16 04:12:39.431211 | orchestrator | Monday 16 February 2026 04:12:14 +0000 (0:00:00.070) 0:00:55.335 ******* 2026-02-16 04:12:39.431231 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:12:39.431251 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:12:39.431272 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:12:39.431290 | orchestrator | 2026-02-16 04:12:39.431339 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-02-16 04:12:39.431352 | orchestrator | Monday 16 February 2026 04:12:25 +0000 (0:00:11.133) 0:01:06.468 ******* 2026-02-16 04:12:39.431362 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:12:39.431373 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:12:39.431385 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:12:39.431395 | orchestrator | 2026-02-16 04:12:39.431406 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 04:12:39.431419 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-16 04:12:39.431431 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-16 04:12:39.431442 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-16 04:12:39.431453 | orchestrator | 2026-02-16 04:12:39.431464 | orchestrator | 2026-02-16 04:12:39.431475 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 04:12:39.431486 | orchestrator | Monday 16 
February 2026 04:12:39 +0000 (0:00:13.722) 0:01:20.191 ******* 2026-02-16 04:12:39.431497 | orchestrator | =============================================================================== 2026-02-16 04:12:39.431507 | orchestrator | skyline : Restart skyline-console container ---------------------------- 13.72s 2026-02-16 04:12:39.431518 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 11.13s 2026-02-16 04:12:39.431529 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.87s 2026-02-16 04:12:39.431540 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.86s 2026-02-16 04:12:39.431608 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 3.95s 2026-02-16 04:12:39.431625 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.84s 2026-02-16 04:12:39.431636 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.45s 2026-02-16 04:12:39.431647 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.23s 2026-02-16 04:12:39.431680 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.22s 2026-02-16 04:12:39.431692 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.56s 2026-02-16 04:12:39.431703 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.44s 2026-02-16 04:12:39.431714 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.29s 2026-02-16 04:12:39.431725 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.27s 2026-02-16 04:12:39.431736 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.19s 2026-02-16 04:12:39.431747 | orchestrator | skyline : Copying over 
nginx.conf files for services -------------------- 2.06s 2026-02-16 04:12:39.431757 | orchestrator | skyline : Check skyline container --------------------------------------- 1.70s 2026-02-16 04:12:39.431768 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.55s 2026-02-16 04:12:39.431779 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.30s 2026-02-16 04:12:39.431794 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.25s 2026-02-16 04:12:39.431813 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.71s 2026-02-16 04:12:41.760130 | orchestrator | 2026-02-16 04:12:41 | INFO  | Task e9346962-0fb3-42e7-8c68-c2cb0aabacb9 (glance) was prepared for execution. 2026-02-16 04:12:41.760249 | orchestrator | 2026-02-16 04:12:41 | INFO  | It takes a moment until task e9346962-0fb3-42e7-8c68-c2cb0aabacb9 (glance) has been started and output is visible here. 
2026-02-16 04:13:15.246269 | orchestrator |
2026-02-16 04:13:15.246404 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-16 04:13:15.246439 | orchestrator |
2026-02-16 04:13:15.246460 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-16 04:13:15.246479 | orchestrator | Monday 16 February 2026 04:12:45 +0000 (0:00:00.261) 0:00:00.261 *******
2026-02-16 04:13:15.246498 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:13:15.246548 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:13:15.246565 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:13:15.246582 | orchestrator |
2026-02-16 04:13:15.246599 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-16 04:13:15.246618 | orchestrator | Monday 16 February 2026 04:12:46 +0000 (0:00:00.297) 0:00:00.559 *******
2026-02-16 04:13:15.246635 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-02-16 04:13:15.246653 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-02-16 04:13:15.246668 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-02-16 04:13:15.246684 | orchestrator |
2026-02-16 04:13:15.246701 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-02-16 04:13:15.246719 | orchestrator |
2026-02-16 04:13:15.246735 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-16 04:13:15.246753 | orchestrator | Monday 16 February 2026 04:12:46 +0000 (0:00:00.427) 0:00:00.986 *******
2026-02-16 04:13:15.246772 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:13:15.246791 | orchestrator |
2026-02-16 04:13:15.246808 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-02-16 04:13:15.246826 | orchestrator | Monday 16 February 2026 04:12:47 +0000 (0:00:00.529) 0:00:01.516 *******
2026-02-16 04:13:15.246845 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-02-16 04:13:15.246863 | orchestrator |
2026-02-16 04:13:15.246882 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-02-16 04:13:15.246900 | orchestrator | Monday 16 February 2026 04:12:50 +0000 (0:00:03.397) 0:00:04.913 *******
2026-02-16 04:13:15.246920 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-02-16 04:13:15.246939 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-02-16 04:13:15.246957 | orchestrator |
2026-02-16 04:13:15.246975 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-02-16 04:13:15.246994 | orchestrator | Monday 16 February 2026 04:12:57 +0000 (0:00:06.484) 0:00:11.398 *******
2026-02-16 04:13:15.247014 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-16 04:13:15.247035 | orchestrator |
2026-02-16 04:13:15.247054 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-02-16 04:13:15.247072 | orchestrator | Monday 16 February 2026 04:13:00 +0000 (0:00:03.138) 0:00:14.536 *******
2026-02-16 04:13:15.247091 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-16 04:13:15.247111 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-02-16 04:13:15.247130 | orchestrator |
2026-02-16 04:13:15.247149 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-02-16 04:13:15.247168 | orchestrator | Monday 16 February 2026 04:13:04 +0000 (0:00:04.093) 0:00:18.629 *******
2026-02-16 04:13:15.247186 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-16 04:13:15.247204 | orchestrator |
2026-02-16 04:13:15.247224 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-02-16 04:13:15.247264 | orchestrator | Monday 16 February 2026 04:13:07 +0000 (0:00:03.187) 0:00:21.817 *******
2026-02-16 04:13:15.247280 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-02-16 04:13:15.247291 | orchestrator |
2026-02-16 04:13:15.247302 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-02-16 04:13:15.247314 | orchestrator | Monday 16 February 2026 04:13:11 +0000 (0:00:03.793) 0:00:25.611 *******
2026-02-16 04:13:15.247392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-16 04:13:15.247410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-16 04:13:15.247430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-16 04:13:15.247451 | orchestrator |
2026-02-16 04:13:15.247463 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-16 04:13:15.247474 | orchestrator | Monday 16 February 2026 04:13:14 +0000 (0:00:03.303) 0:00:28.914 *******
2026-02-16 04:13:15.247486 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:13:15.247498 | orchestrator |
2026-02-16 04:13:15.247545 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-02-16 04:13:30.204356 | orchestrator | Monday 16 February 2026 04:13:15 +0000 (0:00:00.686) 0:00:29.600 *******
2026-02-16 04:13:30.204564 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:13:30.204592 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:13:30.204605 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:13:30.204619 | orchestrator |
2026-02-16 04:13:30.204637 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-02-16 04:13:30.204652 | orchestrator | Monday 16 February 2026 04:13:18 +0000 (0:00:03.362) 0:00:32.962 *******
2026-02-16 04:13:30.204666 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-16 04:13:30.204685 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-16 04:13:30.204698 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-16 04:13:30.204711 | orchestrator |
2026-02-16 04:13:30.204726 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-02-16 04:13:30.204743 | orchestrator | Monday 16 February 2026 04:13:20 +0000 (0:00:01.522) 0:00:34.484 *******
2026-02-16 04:13:30.204760 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-16 04:13:30.204775 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-16 04:13:30.204790 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-16 04:13:30.204804 | orchestrator |
2026-02-16 04:13:30.204817 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-02-16 04:13:30.204831 | orchestrator | Monday 16 February 2026 04:13:21 +0000 (0:00:01.421) 0:00:35.906 *******
2026-02-16 04:13:30.204845 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:13:30.204859 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:13:30.204872 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:13:30.204883 | orchestrator |
2026-02-16 04:13:30.204895 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-02-16 04:13:30.204907 | orchestrator | Monday 16 February 2026 04:13:22 +0000 (0:00:00.645) 0:00:36.552 *******
2026-02-16 04:13:30.204919 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:13:30.204958 | orchestrator |
2026-02-16 04:13:30.204967 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-02-16 04:13:30.204974 | orchestrator | Monday 16 February 2026 04:13:22 +0000 (0:00:00.152) 0:00:36.704 *******
2026-02-16 04:13:30.204982 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:13:30.204989 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:13:30.204997 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:13:30.205004 | orchestrator |
2026-02-16 04:13:30.205012 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-16 04:13:30.205020 | orchestrator | Monday 16 February 2026 04:13:22 +0000 (0:00:00.289) 0:00:36.994 *******
2026-02-16 04:13:30.205027 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:13:30.205036 | orchestrator |
2026-02-16 04:13:30.205044 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-02-16 04:13:30.205051 | orchestrator | Monday 16 February 2026 04:13:23 +0000 (0:00:00.749) 0:00:37.743 *******
2026-02-16 04:13:30.205078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-16 04:13:30.205117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-16 04:13:30.205147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-16 04:13:30.205160 | orchestrator |
2026-02-16 04:13:30.205173 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2026-02-16 04:13:30.205185 | orchestrator | Monday 16 February 2026 04:13:27 +0000 (0:00:03.825) 0:00:41.569 *******
2026-02-16 04:13:30.205208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-16 04:13:33.545521 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:13:33.545648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-16 04:13:33.545670 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:13:33.545685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-16 04:13:33.545697 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:13:33.545709 | orchestrator |
2026-02-16 04:13:33.545721 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2026-02-16 04:13:33.545734 | orchestrator | Monday 16 February 2026 04:13:30 +0000 (0:00:02.993) 0:00:44.562 *******
2026-02-16 04:13:33.545765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-16 04:13:33.545802 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:13:33.545821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-16 04:13:33.545833 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:13:33.545855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-16 04:14:04.585887 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:14:04.586007 | orchestrator |
2026-02-16 04:14:04.586086 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2026-02-16 04:14:04.586101 | orchestrator | Monday 16 February 2026 04:13:33 +0000 (0:00:03.341) 0:00:47.904 *******
2026-02-16 04:14:04.586113 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:14:04.586124 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:14:04.586135 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:14:04.586147 | orchestrator |
2026-02-16 04:14:04.586158 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2026-02-16 04:14:04.586170 | orchestrator | Monday 16 February 2026 04:13:36 +0000 (0:00:03.088) 0:00:50.992 *******
2026-02-16 04:14:04.586202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-16 04:14:04.586221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-16 04:14:04.586288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-16 04:14:04.586303 | orchestrator |
2026-02-16 04:14:04.586314 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-02-16 04:14:04.586326 | orchestrator | Monday 16 February 2026 04:13:40 +0000 (0:00:03.746) 0:00:54.739 *******
2026-02-16 04:14:04.586337 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:14:04.586347 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:14:04.586358 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:14:04.586379 | orchestrator |
2026-02-16 04:14:04.586400 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-02-16 04:14:04.586420 | orchestrator | Monday 16 February 2026 04:13:45 +0000 (0:00:05.440) 0:01:00.179 *******
2026-02-16 04:14:04.586471 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:14:04.586490 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:14:04.586506 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:14:04.586525 | orchestrator |
2026-02-16 04:14:04.586543 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-02-16 04:14:04.586563 | orchestrator | Monday 16 February 2026 04:13:48 +0000 (0:00:03.187) 0:01:03.366 *******
2026-02-16 04:14:04.586595 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:14:04.586614 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:14:04.586625 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:14:04.586636 | orchestrator |
2026-02-16 04:14:04.586647 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-02-16 04:14:04.586658 | orchestrator | Monday 16 February 2026 04:13:51 +0000 (0:00:02.923) 0:01:06.290 *******
2026-02-16 04:14:04.586669 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:14:04.586679 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:14:04.586690 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:14:04.586700 | orchestrator |
2026-02-16 04:14:04.586711 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-02-16 04:14:04.586722 | orchestrator | Monday 16 February 2026 04:13:54 +0000 (0:00:02.741) 0:01:09.031 *******
2026-02-16 04:14:04.586733 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:14:04.586743 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:14:04.586754 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:14:04.586765 | orchestrator |
2026-02-16 04:14:04.586776 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-02-16 04:14:04.586787 | orchestrator | Monday 16 February 2026 04:13:57 +0000 (0:00:02.716) 0:01:11.748 *******
2026-02-16 04:14:04.586797 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:14:04.586808 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:14:04.586819 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:14:04.586829 | orchestrator |
2026-02-16 04:14:04.586840 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-02-16 04:14:04.586851 | orchestrator | Monday 16 February 2026 04:13:57 +0000 (0:00:00.410) 0:01:12.159 *******
2026-02-16 04:14:04.586862 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-16 04:14:04.586874 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:14:04.586885 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-16 04:14:04.586896 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:14:04.586906 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-16 04:14:04.586917 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:14:04.586928 | orchestrator |
2026-02-16 04:14:04.586939 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-02-16 04:14:04.586950 | orchestrator | Monday 16 February 2026 04:14:00 +0000 (0:00:02.883) 0:01:15.042 *******
2026-02-16 04:14:04.586960 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:14:04.586971 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:14:04.586982 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:14:04.586992 | orchestrator |
2026-02-16 04:14:04.587004 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-02-16 04:14:04.587024 | orchestrator | Monday 16 February 2026 04:14:04 +0000 (0:00:03.895) 0:01:18.938 *******
2026-02-16 04:15:17.032484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-16 04:15:17.032612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-16 04:15:17.032652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-16 04:15:17.032670 | orchestrator | 2026-02-16 04:15:17.032681 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-16 04:15:17.032689 | orchestrator | Monday 16 February 2026 04:14:08 +0000 (0:00:03.577) 0:01:22.516 ******* 2026-02-16 04:15:17.032697 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:15:17.032706 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:15:17.032713 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:15:17.032721 | orchestrator | 2026-02-16 04:15:17.032729 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-16 04:15:17.032737 | orchestrator | Monday 16 February 2026 04:14:08 +0000 (0:00:00.513) 0:01:23.029 ******* 2026-02-16 04:15:17.032745 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:15:17.032752 | orchestrator | 2026-02-16 04:15:17.032760 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-02-16 04:15:17.032767 | orchestrator | Monday 16 February 2026 04:14:10 +0000 (0:00:02.120) 0:01:25.150 ******* 2026-02-16 04:15:17.032774 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:15:17.032782 | orchestrator | 2026-02-16 04:15:17.032789 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-16 04:15:17.032797 | orchestrator | Monday 16 February 2026 04:14:13 +0000 (0:00:02.308) 0:01:27.458 ******* 2026-02-16 04:15:17.032804 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:15:17.032811 | orchestrator | 2026-02-16 04:15:17.032818 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-16 04:15:17.032826 | orchestrator | Monday 16 February 2026 04:14:15 +0000 (0:00:02.174) 0:01:29.632 ******* 2026-02-16 04:15:17.032834 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:15:17.032842 | orchestrator | 2026-02-16 04:15:17.032850 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-16 04:15:17.032857 | orchestrator | Monday 16 February 2026 04:14:43 +0000 (0:00:28.377) 0:01:58.010 ******* 2026-02-16 04:15:17.032865 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:15:17.032872 | orchestrator | 2026-02-16 04:15:17.032879 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-16 04:15:17.032887 | orchestrator | Monday 16 February 2026 04:14:46 +0000 (0:00:02.412) 0:02:00.423 ******* 2026-02-16 04:15:17.032895 | orchestrator | 2026-02-16 04:15:17.032902 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-16 04:15:17.032910 | orchestrator | Monday 16 February 2026 04:14:46 +0000 (0:00:00.069) 0:02:00.492 ******* 2026-02-16 04:15:17.032918 | orchestrator | 2026-02-16 04:15:17.032926 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-02-16 04:15:17.032934 | orchestrator | Monday 16 February 2026 04:14:46 +0000 (0:00:00.069) 0:02:00.562 ******* 2026-02-16 04:15:17.032942 | orchestrator | 2026-02-16 04:15:17.032950 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-16 04:15:17.032958 | orchestrator | Monday 16 February 2026 04:14:46 +0000 (0:00:00.069) 0:02:00.632 ******* 2026-02-16 04:15:17.032966 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:15:17.032974 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:15:17.032982 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:15:17.032990 | orchestrator | 2026-02-16 04:15:17.032997 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 04:15:17.033006 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-16 04:15:17.033016 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-16 04:15:17.033024 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-16 04:15:17.033032 | orchestrator | 2026-02-16 04:15:17.033048 | orchestrator | 2026-02-16 04:15:17.033056 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 04:15:17.033064 | orchestrator | Monday 16 February 2026 04:15:17 +0000 (0:00:30.745) 0:02:31.377 ******* 2026-02-16 04:15:17.033073 | orchestrator | =============================================================================== 2026-02-16 04:15:17.033081 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.75s 2026-02-16 04:15:17.033090 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.38s 2026-02-16 04:15:17.033097 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.48s 2026-02-16 04:15:17.033113 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.44s 2026-02-16 04:15:17.369860 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.09s 2026-02-16 04:15:17.369951 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.90s 2026-02-16 04:15:17.369964 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.83s 2026-02-16 04:15:17.369973 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.79s 2026-02-16 04:15:17.369982 | orchestrator | glance : Copying over config.json files for services -------------------- 3.75s 2026-02-16 04:15:17.369991 | orchestrator | glance : Check glance containers ---------------------------------------- 3.58s 2026-02-16 04:15:17.370065 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.40s 2026-02-16 04:15:17.370076 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.36s 2026-02-16 04:15:17.370085 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.34s 2026-02-16 04:15:17.370094 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.30s 2026-02-16 04:15:17.370103 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.19s 2026-02-16 04:15:17.370112 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.19s 2026-02-16 04:15:17.370121 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.14s 2026-02-16 04:15:17.370130 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.09s 2026-02-16 04:15:17.370139 | orchestrator | 
service-cert-copy : glance | Copying over backend internal TLS certificate --- 2.99s 2026-02-16 04:15:17.370148 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 2.92s 2026-02-16 04:15:19.711099 | orchestrator | 2026-02-16 04:15:19 | INFO  | Task 49cd2165-b3e5-408e-84ed-84cd04f452ac (cinder) was prepared for execution. 2026-02-16 04:15:19.711202 | orchestrator | 2026-02-16 04:15:19 | INFO  | It takes a moment until task 49cd2165-b3e5-408e-84ed-84cd04f452ac (cinder) has been started and output is visible here. 2026-02-16 04:15:54.218388 | orchestrator | 2026-02-16 04:15:54.218485 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 04:15:54.218496 | orchestrator | 2026-02-16 04:15:54.218504 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 04:15:54.218512 | orchestrator | Monday 16 February 2026 04:15:23 +0000 (0:00:00.249) 0:00:00.249 ******* 2026-02-16 04:15:54.218520 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:15:54.218529 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:15:54.218536 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:15:54.218544 | orchestrator | 2026-02-16 04:15:54.218551 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 04:15:54.218558 | orchestrator | Monday 16 February 2026 04:15:24 +0000 (0:00:00.295) 0:00:00.545 ******* 2026-02-16 04:15:54.218565 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-16 04:15:54.218572 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-16 04:15:54.218580 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-16 04:15:54.218587 | orchestrator | 2026-02-16 04:15:54.218594 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-16 04:15:54.218621 | orchestrator | 2026-02-16 
04:15:54.218629 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-16 04:15:54.218636 | orchestrator | Monday 16 February 2026 04:15:24 +0000 (0:00:00.348) 0:00:00.893 ******* 2026-02-16 04:15:54.218643 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 04:15:54.218651 | orchestrator | 2026-02-16 04:15:54.218658 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-16 04:15:54.218665 | orchestrator | Monday 16 February 2026 04:15:24 +0000 (0:00:00.422) 0:00:01.316 ******* 2026-02-16 04:15:54.218673 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-16 04:15:54.218680 | orchestrator | 2026-02-16 04:15:54.218687 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-16 04:15:54.218694 | orchestrator | Monday 16 February 2026 04:15:28 +0000 (0:00:03.478) 0:00:04.795 ******* 2026-02-16 04:15:54.218702 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-16 04:15:54.218710 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-16 04:15:54.218717 | orchestrator | 2026-02-16 04:15:54.218724 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-16 04:15:54.218731 | orchestrator | Monday 16 February 2026 04:15:34 +0000 (0:00:06.282) 0:00:11.078 ******* 2026-02-16 04:15:54.218738 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-16 04:15:54.218745 | orchestrator | 2026-02-16 04:15:54.218753 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-16 04:15:54.218760 | orchestrator | Monday 16 February 2026 04:15:37 +0000 (0:00:03.126) 
0:00:14.205 ******* 2026-02-16 04:15:54.218767 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-16 04:15:54.218774 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-16 04:15:54.218781 | orchestrator | 2026-02-16 04:15:54.218788 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-16 04:15:54.218794 | orchestrator | Monday 16 February 2026 04:15:41 +0000 (0:00:04.050) 0:00:18.255 ******* 2026-02-16 04:15:54.218802 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-16 04:15:54.218809 | orchestrator | 2026-02-16 04:15:54.218816 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-16 04:15:54.218823 | orchestrator | Monday 16 February 2026 04:15:44 +0000 (0:00:03.147) 0:00:21.403 ******* 2026-02-16 04:15:54.218830 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-16 04:15:54.218837 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-16 04:15:54.218844 | orchestrator | 2026-02-16 04:15:54.218851 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-16 04:15:54.218858 | orchestrator | Monday 16 February 2026 04:15:52 +0000 (0:00:07.452) 0:00:28.856 ******* 2026-02-16 04:15:54.218880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-16 04:15:54.218911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-16 04:15:54.218919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-16 04:15:54.218928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:15:54.218936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:15:54.218947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:15:54.218955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-16 04:15:54.218973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-16 04:15:59.913047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-16 04:15:59.913161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-16 04:15:59.913178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-16 04:15:59.913207 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-16 04:15:59.913243 | orchestrator |
2026-02-16 04:15:59.913375 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-02-16 04:15:59.913393 | orchestrator | Monday 16 February 2026 04:15:54 +0000 (0:00:01.935) 0:00:30.792 *******
2026-02-16 04:15:59.913405 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:15:59.913417 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:15:59.913428 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:15:59.913438 | orchestrator |
2026-02-16 04:15:59.913450 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-02-16 04:15:59.913460 | orchestrator | Monday 16 February 2026 04:15:54 +0000 (0:00:00.468) 0:00:31.260 *******
2026-02-16 04:15:59.913472 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:15:59.913483 | orchestrator |
2026-02-16 04:15:59.913495 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-02-16 04:15:59.913514 | orchestrator | Monday 16 February 2026 04:15:55 +0000 (0:00:00.527) 0:00:31.787 *******
2026-02-16 04:15:59.913533 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-02-16 04:15:59.913550 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-02-16 04:15:59.913568 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-02-16 04:15:59.913587 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-02-16 04:15:59.913600 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-02-16 04:15:59.913611 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-02-16 04:15:59.913622 | orchestrator |
2026-02-16 04:15:59.913633 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-02-16 04:15:59.913644 | orchestrator | Monday 16 February 2026 04:15:56 +0000 (0:00:01.598) 0:00:33.386 *******
2026-02-16 04:15:59.913677 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-16 04:15:59.913691 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-16 04:15:59.913711 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-16 04:15:59.913735 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-16 04:15:59.913754 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-16 04:16:10.308739 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-16 04:16:10.308880 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-16 04:16:10.308920 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-16 04:16:10.308967 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-16 04:16:10.308980 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-16 04:16:10.309014 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-16 04:16:10.309026 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-16 04:16:10.309038 | orchestrator |
2026-02-16 04:16:10.309052 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-02-16 04:16:10.309064 | orchestrator | Monday 16 February 2026 04:16:00 +0000 (0:00:03.314) 0:00:36.700 *******
2026-02-16 04:16:10.309076 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-02-16 04:16:10.309088 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-02-16 04:16:10.309107 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-02-16 04:16:10.309118 | orchestrator |
2026-02-16 04:16:10.309129 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-02-16 04:16:10.309140 | orchestrator | Monday 16 February 2026 04:16:01 +0000 (0:00:01.460) 0:00:38.161 *******
2026-02-16 04:16:10.309152 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring)
2026-02-16 04:16:10.309163 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring)
2026-02-16 04:16:10.309173 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring)
2026-02-16 04:16:10.309184 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring)
2026-02-16 04:16:10.309203 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring)
2026-02-16 04:16:10.309214 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring)
2026-02-16 04:16:10.309353 | orchestrator |
2026-02-16 04:16:10.309370 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-02-16 04:16:10.309383 | orchestrator | Monday 16 February 2026 04:16:04 +0000 (0:00:02.579) 0:00:40.741 *******
2026-02-16 04:16:10.309395 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-02-16 04:16:10.309408 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-02-16 04:16:10.309420 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-02-16 04:16:10.309432 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-02-16 04:16:10.309443 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-02-16 04:16:10.309455 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-02-16 04:16:10.309467 | orchestrator |
2026-02-16 04:16:10.309479 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-02-16 04:16:10.309491 | orchestrator | Monday 16 February 2026 04:16:05 +0000 (0:00:01.003) 0:00:41.745 *******
2026-02-16 04:16:10.309503 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:16:10.309515 | orchestrator |
2026-02-16 04:16:10.309527 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-02-16 04:16:10.309538 | orchestrator | Monday 16 February 2026 04:16:05 +0000 (0:00:00.130) 0:00:41.875 *******
2026-02-16 04:16:10.309550 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:16:10.309562 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:16:10.309574 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:16:10.309586 | orchestrator |
2026-02-16 04:16:10.309597 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-02-16 04:16:10.309609 | orchestrator | Monday 16 February 2026 04:16:05 +0000 (0:00:00.463) 0:00:42.339 *******
2026-02-16 04:16:10.309621 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:16:10.309634 | orchestrator |
2026-02-16 04:16:10.309645 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-02-16 04:16:10.309657 | orchestrator | Monday 16 February 2026 04:16:06 +0000 (0:00:00.565) 0:00:42.905 *******
2026-02-16 04:16:10.309682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-16 04:16:11.217968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-16 04:16:11.218134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-16 04:16:11.218151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.218164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.218174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.218203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.218284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.218303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.218316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.218327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.218338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.218355 | orchestrator |
2026-02-16 04:16:11.218368 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2026-02-16 04:16:11.218379 | orchestrator | Monday 16 February 2026 04:16:10 +0000 (0:00:03.984) 0:00:46.889 *******
2026-02-16 04:16:11.218399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-16 04:16:11.326857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.327002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.327022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.327035 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:16:11.327049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-16 04:16:11.327083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.327115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.327133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.327146 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:16:11.327158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-16 04:16:11.327170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.327189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.327200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.327212 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:16:11.327223 | orchestrator |
2026-02-16 04:16:11.327236 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-02-16 04:16:11.327291 | orchestrator | Monday 16 February 2026 04:16:11 +0000 (0:00:00.922) 0:00:47.812 *******
2026-02-16 04:16:11.893927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-16 04:16:11.894081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.894099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.894132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.894144 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:16:11.894157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-16 04:16:11.894184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.894202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-16 04:16:11.894213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group':
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-16 04:16:11.894223 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:16:11.894234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-16 04:16:11.894320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:16:11.894339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-16 04:16:16.486446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-16 04:16:16.486563 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:16:16.486580 | orchestrator | 2026-02-16 04:16:16.486608 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-02-16 04:16:16.486621 | orchestrator | Monday 16 February 2026 04:16:12 +0000 (0:00:00.939) 0:00:48.751 ******* 2026-02-16 04:16:16.486634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-16 04:16:16.486670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-16 
04:16:16.486683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-16 04:16:16.486712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:16.486726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:16.486744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:16.486756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:16.486777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:16.486789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:16.486807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:28.759837 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:28.759945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:28.759984 | orchestrator | 2026-02-16 04:16:28.759998 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-16 04:16:28.760011 | orchestrator | Monday 16 February 2026 04:16:16 +0000 (0:00:04.305) 0:00:53.057 ******* 2026-02-16 04:16:28.760022 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-16 04:16:28.760034 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-16 04:16:28.760045 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-16 04:16:28.760056 | orchestrator | 2026-02-16 04:16:28.760067 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-16 04:16:28.760077 | orchestrator | Monday 16 February 2026 04:16:18 +0000 (0:00:01.756) 0:00:54.813 ******* 2026-02-16 04:16:28.760090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-16 04:16:28.760103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-16 04:16:28.760140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-16 04:16:28.760154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:28.760174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:28.760185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:28.760197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:28.760209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:28.760301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:31.088591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:31.088719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:31.088737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:31.088750 | orchestrator | 2026-02-16 04:16:31.088764 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-16 04:16:31.088777 | orchestrator | Monday 16 February 2026 04:16:28 +0000 (0:00:10.529) 0:01:05.343 ******* 2026-02-16 04:16:31.088788 | orchestrator | changed: [testbed-node-0] 
2026-02-16 04:16:31.088800 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:16:31.088811 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:16:31.088822 | orchestrator | 2026-02-16 04:16:31.088833 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-16 04:16:31.088844 | orchestrator | Monday 16 February 2026 04:16:30 +0000 (0:00:01.474) 0:01:06.818 ******* 2026-02-16 04:16:31.088856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-16 04:16:31.088870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-02-16 04:16:31.088914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-16 04:16:31.088929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-16 04:16:31.088941 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:16:31.088952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-16 04:16:31.088964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:16:31.088975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-16 04:16:31.089017 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-16 04:16:34.518526 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:16:34.518641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-16 04:16:34.518662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:16:34.518677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-16 04:16:34.518690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-16 04:16:34.518702 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:16:34.518714 | orchestrator | 2026-02-16 
04:16:34.518760 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-16 04:16:34.518774 | orchestrator | Monday 16 February 2026 04:16:31 +0000 (0:00:00.860) 0:01:07.678 ******* 2026-02-16 04:16:34.518785 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:16:34.518796 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:16:34.518806 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:16:34.518817 | orchestrator | 2026-02-16 04:16:34.518828 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-16 04:16:34.518839 | orchestrator | Monday 16 February 2026 04:16:31 +0000 (0:00:00.524) 0:01:08.203 ******* 2026-02-16 04:16:34.518882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-16 04:16:34.518897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-16 04:16:34.518908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-16 04:16:34.518920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:34.518940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:34.518956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:16:34.518978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-16 04:18:17.845372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-16 04:18:17.845495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-16 04:18:17.845514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-16 04:18:17.845559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-16 04:18:17.845579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-02-16 04:18:17.845600 | orchestrator | 2026-02-16 04:18:17.845632 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-16 04:18:17.845655 | orchestrator | Monday 16 February 2026 04:16:34 +0000 (0:00:02.894) 0:01:11.098 ******* 2026-02-16 04:18:17.845673 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:18:17.845692 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:18:17.845709 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:18:17.845726 | orchestrator | 2026-02-16 04:18:17.845744 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-16 04:18:17.845846 | orchestrator | Monday 16 February 2026 04:16:34 +0000 (0:00:00.314) 0:01:11.413 ******* 2026-02-16 04:18:17.845872 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:18:17.845891 | orchestrator | 2026-02-16 04:18:17.845937 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-16 04:18:17.845960 | orchestrator | Monday 16 February 2026 04:16:37 +0000 (0:00:02.133) 0:01:13.546 ******* 2026-02-16 04:18:17.845980 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:18:17.846168 | orchestrator | 2026-02-16 04:18:17.846191 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-16 04:18:17.846203 | orchestrator | Monday 16 February 2026 04:16:39 +0000 (0:00:02.183) 0:01:15.730 ******* 2026-02-16 04:18:17.846214 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:18:17.846224 | orchestrator | 2026-02-16 04:18:17.846235 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-16 04:18:17.846246 | orchestrator | Monday 16 February 2026 04:16:58 +0000 (0:00:19.393) 0:01:35.124 ******* 2026-02-16 04:18:17.846257 | orchestrator | 2026-02-16 04:18:17.846267 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-02-16 04:18:17.846278 | orchestrator | Monday 16 February 2026 04:16:58 +0000 (0:00:00.069) 0:01:35.193 ******* 2026-02-16 04:18:17.846289 | orchestrator | 2026-02-16 04:18:17.846299 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-16 04:18:17.846310 | orchestrator | Monday 16 February 2026 04:16:58 +0000 (0:00:00.069) 0:01:35.263 ******* 2026-02-16 04:18:17.846321 | orchestrator | 2026-02-16 04:18:17.846332 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-16 04:18:17.846342 | orchestrator | Monday 16 February 2026 04:16:58 +0000 (0:00:00.071) 0:01:35.335 ******* 2026-02-16 04:18:17.846353 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:18:17.846380 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:18:17.846391 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:18:17.846402 | orchestrator | 2026-02-16 04:18:17.846412 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-16 04:18:17.846423 | orchestrator | Monday 16 February 2026 04:17:30 +0000 (0:00:31.318) 0:02:06.653 ******* 2026-02-16 04:18:17.846433 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:18:17.846444 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:18:17.846455 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:18:17.846465 | orchestrator | 2026-02-16 04:18:17.846476 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-16 04:18:17.846487 | orchestrator | Monday 16 February 2026 04:17:40 +0000 (0:00:10.193) 0:02:16.847 ******* 2026-02-16 04:18:17.846497 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:18:17.846508 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:18:17.846518 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:18:17.846529 | orchestrator | 2026-02-16 
04:18:17.846539 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-16 04:18:17.846550 | orchestrator | Monday 16 February 2026 04:18:06 +0000 (0:00:26.184) 0:02:43.031 ******* 2026-02-16 04:18:17.846561 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:18:17.846572 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:18:17.846582 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:18:17.846593 | orchestrator | 2026-02-16 04:18:17.846604 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-16 04:18:17.846616 | orchestrator | Monday 16 February 2026 04:18:17 +0000 (0:00:11.004) 0:02:54.036 ******* 2026-02-16 04:18:17.846627 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:18:17.846637 | orchestrator | 2026-02-16 04:18:17.846648 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 04:18:17.846660 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-16 04:18:17.846673 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-16 04:18:17.846683 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-16 04:18:17.846694 | orchestrator | 2026-02-16 04:18:17.846705 | orchestrator | 2026-02-16 04:18:17.846716 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 04:18:17.846727 | orchestrator | Monday 16 February 2026 04:18:17 +0000 (0:00:00.278) 0:02:54.314 ******* 2026-02-16 04:18:17.846738 | orchestrator | =============================================================================== 2026-02-16 04:18:17.846748 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 31.32s 2026-02-16 04:18:17.846759 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 26.18s 2026-02-16 04:18:17.846771 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.39s 2026-02-16 04:18:17.846800 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.00s 2026-02-16 04:18:17.846820 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.53s 2026-02-16 04:18:17.846838 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.19s 2026-02-16 04:18:17.846858 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.45s 2026-02-16 04:18:17.846878 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.28s 2026-02-16 04:18:17.846897 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.31s 2026-02-16 04:18:17.846916 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.05s 2026-02-16 04:18:17.846936 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.98s 2026-02-16 04:18:17.846969 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.48s 2026-02-16 04:18:17.846988 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.31s 2026-02-16 04:18:17.847008 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.15s 2026-02-16 04:18:17.847043 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.13s 2026-02-16 04:18:18.196194 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.89s 2026-02-16 04:18:18.196338 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.58s 2026-02-16 04:18:18.196364 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.18s 2026-02-16 04:18:18.196382 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.13s 2026-02-16 04:18:18.196394 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 1.94s 2026-02-16 04:18:20.449711 | orchestrator | 2026-02-16 04:18:20 | INFO  | Task 18c0aa3a-b608-4428-bb79-0f1a81675cf2 (barbican) was prepared for execution. 2026-02-16 04:18:20.449800 | orchestrator | 2026-02-16 04:18:20 | INFO  | It takes a moment until task 18c0aa3a-b608-4428-bb79-0f1a81675cf2 (barbican) has been started and output is visible here. 2026-02-16 04:19:03.726905 | orchestrator | 2026-02-16 04:19:03.727091 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 04:19:03.727109 | orchestrator | 2026-02-16 04:19:03.727120 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 04:19:03.727132 | orchestrator | Monday 16 February 2026 04:18:24 +0000 (0:00:00.253) 0:00:00.253 ******* 2026-02-16 04:19:03.727142 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:19:03.727153 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:19:03.727163 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:19:03.727174 | orchestrator | 2026-02-16 04:19:03.727184 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 04:19:03.727195 | orchestrator | Monday 16 February 2026 04:18:24 +0000 (0:00:00.301) 0:00:00.554 ******* 2026-02-16 04:19:03.727205 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-16 04:19:03.727215 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-02-16 04:19:03.727225 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-16 04:19:03.727234 | orchestrator | 2026-02-16 04:19:03.727244 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-02-16 04:19:03.727254 | orchestrator | 2026-02-16 04:19:03.727264 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-16 04:19:03.727274 | orchestrator | Monday 16 February 2026 04:18:25 +0000 (0:00:00.413) 0:00:00.968 ******* 2026-02-16 04:19:03.727284 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 04:19:03.727295 | orchestrator | 2026-02-16 04:19:03.727305 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-02-16 04:19:03.727315 | orchestrator | Monday 16 February 2026 04:18:25 +0000 (0:00:00.528) 0:00:01.496 ******* 2026-02-16 04:19:03.727325 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-16 04:19:03.727335 | orchestrator | 2026-02-16 04:19:03.727345 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-02-16 04:19:03.727355 | orchestrator | Monday 16 February 2026 04:18:29 +0000 (0:00:03.347) 0:00:04.844 ******* 2026-02-16 04:19:03.727365 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-16 04:19:03.727375 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-16 04:19:03.727385 | orchestrator | 2026-02-16 04:19:03.727395 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-16 04:19:03.727406 | orchestrator | Monday 16 February 2026 04:18:35 +0000 (0:00:06.426) 0:00:11.271 ******* 2026-02-16 04:19:03.727460 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-16 04:19:03.727483 | orchestrator | 2026-02-16 04:19:03.727500 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-16 
04:19:03.727516 | orchestrator | Monday 16 February 2026 04:18:38 +0000 (0:00:03.242) 0:00:14.514 ******* 2026-02-16 04:19:03.727532 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-16 04:19:03.727548 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-02-16 04:19:03.727565 | orchestrator | 2026-02-16 04:19:03.727582 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-02-16 04:19:03.727598 | orchestrator | Monday 16 February 2026 04:18:42 +0000 (0:00:04.080) 0:00:18.594 ******* 2026-02-16 04:19:03.727614 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-16 04:19:03.727629 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-02-16 04:19:03.727663 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-02-16 04:19:03.727678 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-02-16 04:19:03.727693 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-02-16 04:19:03.727708 | orchestrator | 2026-02-16 04:19:03.727724 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-02-16 04:19:03.727741 | orchestrator | Monday 16 February 2026 04:18:58 +0000 (0:00:15.398) 0:00:33.993 ******* 2026-02-16 04:19:03.727757 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-02-16 04:19:03.727771 | orchestrator | 2026-02-16 04:19:03.727787 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-02-16 04:19:03.727802 | orchestrator | Monday 16 February 2026 04:19:02 +0000 (0:00:03.831) 0:00:37.824 ******* 2026-02-16 04:19:03.727822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-16 04:19:03.727866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-16 04:19:03.727883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-16 04:19:03.727914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:03.727941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:03.727959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:03.727984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:09.351767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:09.351882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:09.351919 | orchestrator | 2026-02-16 04:19:09.351930 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-02-16 04:19:09.351941 | orchestrator | Monday 16 February 2026 04:19:03 +0000 (0:00:01.605) 0:00:39.430 ******* 2026-02-16 04:19:09.351951 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-02-16 04:19:09.351959 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-02-16 04:19:09.351968 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-02-16 04:19:09.351976 | orchestrator | 2026-02-16 04:19:09.351985 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-02-16 04:19:09.352038 | orchestrator | Monday 16 February 2026 04:19:04 +0000 (0:00:01.204) 0:00:40.635 ******* 2026-02-16 04:19:09.352050 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:19:09.352059 | orchestrator | 2026-02-16 04:19:09.352068 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-02-16 04:19:09.352077 | orchestrator | Monday 16 February 2026 04:19:05 +0000 (0:00:00.329) 0:00:40.964 ******* 2026-02-16 04:19:09.352085 | orchestrator | 
skipping: [testbed-node-0] 2026-02-16 04:19:09.352094 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:19:09.352102 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:19:09.352111 | orchestrator | 2026-02-16 04:19:09.352119 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-16 04:19:09.352128 | orchestrator | Monday 16 February 2026 04:19:05 +0000 (0:00:00.284) 0:00:41.249 ******* 2026-02-16 04:19:09.352149 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 04:19:09.352159 | orchestrator | 2026-02-16 04:19:09.352168 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-02-16 04:19:09.352176 | orchestrator | Monday 16 February 2026 04:19:06 +0000 (0:00:00.542) 0:00:41.791 ******* 2026-02-16 04:19:09.352187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-16 04:19:09.352215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-16 04:19:09.352234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-16 04:19:09.352245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:09.352261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:09.352270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:09.352279 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:09.352296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:10.565082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:10.565195 | orchestrator | 2026-02-16 04:19:10.565210 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-02-16 04:19:10.565222 | orchestrator | Monday 16 February 2026 04:19:09 +0000 (0:00:03.268) 0:00:45.060 ******* 2026-02-16 04:19:10.565235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-16 04:19:10.565262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-16 04:19:10.565274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-16 04:19:10.565285 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:19:10.565296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-16 04:19:10.565342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-16 04:19:10.565354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-16 04:19:10.565364 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:19:10.565379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-16 04:19:10.565390 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-16 04:19:10.565400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-16 04:19:10.565417 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:19:10.565427 | orchestrator | 2026-02-16 04:19:10.565438 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-16 04:19:10.565448 | orchestrator | Monday 16 February 2026 04:19:09 +0000 (0:00:00.507) 0:00:45.568 ******* 2026-02-16 04:19:10.565465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-16 04:19:13.815770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-16 04:19:13.815880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-16 
04:19:13.815899 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:19:13.815931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-16 04:19:13.815945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-16 04:19:13.815981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-16 04:19:13.816062 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:19:13.816094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-16 04:19:13.816107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-16 04:19:13.816124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-16 04:19:13.816136 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:19:13.816147 | orchestrator | 2026-02-16 04:19:13.816159 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-16 04:19:13.816172 | orchestrator | Monday 16 February 2026 04:19:10 +0000 (0:00:00.710) 0:00:46.278 ******* 2026-02-16 04:19:13.816183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-16 04:19:13.816204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-16 04:19:13.816239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-16 04:19:22.542436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:22.542587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:22.542629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:22.542644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:22.542657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:22.542669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:22.542681 | orchestrator | 2026-02-16 04:19:22.542695 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-02-16 04:19:22.542708 | orchestrator | Monday 16 February 2026 04:19:13 +0000 (0:00:03.248) 0:00:49.526 ******* 2026-02-16 04:19:22.542719 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:19:22.542732 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:19:22.542743 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:19:22.542754 | orchestrator | 2026-02-16 04:19:22.542783 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-02-16 04:19:22.542795 | orchestrator | Monday 16 February 2026 04:19:15 +0000 (0:00:01.398) 0:00:50.925 ******* 2026-02-16 04:19:22.542807 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 04:19:22.542819 | orchestrator | 2026-02-16 04:19:22.542842 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-02-16 04:19:22.542853 | orchestrator | Monday 16 February 2026 04:19:16 +0000 (0:00:00.826) 0:00:51.751 ******* 2026-02-16 04:19:22.542864 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:19:22.542875 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:19:22.542885 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:19:22.542896 | orchestrator | 2026-02-16 04:19:22.542907 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-16 04:19:22.542918 | orchestrator | Monday 16 February 2026 04:19:16 +0000 (0:00:00.479) 0:00:52.231 ******* 2026-02-16 04:19:22.542945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-16 04:19:22.542959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-16 04:19:22.542974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-16 04:19:22.543016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:23.218636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:23.218783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:23.218802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:23.218817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:23.218829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:23.218841 | orchestrator | 2026-02-16 04:19:23.218854 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-16 04:19:23.218867 | orchestrator | Monday 16 February 2026 04:19:22 +0000 (0:00:06.024) 0:00:58.255 ******* 2026-02-16 04:19:23.218897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-16 04:19:23.218916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-16 04:19:23.218937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-16 04:19:23.218949 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:19:23.218961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-16 04:19:23.218973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-16 04:19:23.219031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-16 04:19:23.219053 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:19:23.219074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-16 04:19:25.473798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-16 04:19:25.473883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-16 04:19:25.473894 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:19:25.473904 | orchestrator | 2026-02-16 04:19:25.473912 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-16 04:19:25.473920 | orchestrator | Monday 16 February 2026 04:19:23 +0000 (0:00:00.672) 0:00:58.928 ******* 2026-02-16 04:19:25.473927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-16 04:19:25.473939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-16 04:19:25.474123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-16 04:19:25.474140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:25.474148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:25.474155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:25.474162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:25.474169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:25.474183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:19:25.474190 | orchestrator | 2026-02-16 04:19:25.474197 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-16 04:19:25.474208 | orchestrator | Monday 16 February 2026 04:19:25 +0000 (0:00:02.250) 0:01:01.178 ******* 2026-02-16 04:20:06.660913 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:20:06.661059 | orchestrator | skipping: [testbed-node-1] 2026-02-16 
04:20:06.661073 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:20:06.661083 | orchestrator | 2026-02-16 04:20:06.661114 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-16 04:20:06.661125 | orchestrator | Monday 16 February 2026 04:19:25 +0000 (0:00:00.291) 0:01:01.469 ******* 2026-02-16 04:20:06.661134 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:20:06.661143 | orchestrator | 2026-02-16 04:20:06.661155 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-16 04:20:06.661170 | orchestrator | Monday 16 February 2026 04:19:27 +0000 (0:00:02.113) 0:01:03.582 ******* 2026-02-16 04:20:06.661186 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:20:06.661200 | orchestrator | 2026-02-16 04:20:06.661215 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-16 04:20:06.661243 | orchestrator | Monday 16 February 2026 04:19:30 +0000 (0:00:02.167) 0:01:05.750 ******* 2026-02-16 04:20:06.661256 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:20:06.661270 | orchestrator | 2026-02-16 04:20:06.661285 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-16 04:20:06.661300 | orchestrator | Monday 16 February 2026 04:19:42 +0000 (0:00:12.031) 0:01:17.782 ******* 2026-02-16 04:20:06.661314 | orchestrator | 2026-02-16 04:20:06.661328 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-16 04:20:06.661343 | orchestrator | Monday 16 February 2026 04:19:42 +0000 (0:00:00.067) 0:01:17.849 ******* 2026-02-16 04:20:06.661359 | orchestrator | 2026-02-16 04:20:06.661371 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-16 04:20:06.661380 | orchestrator | Monday 16 February 2026 04:19:42 +0000 (0:00:00.081) 0:01:17.930 ******* 2026-02-16 
04:20:06.661389 | orchestrator | 2026-02-16 04:20:06.661398 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-16 04:20:06.661406 | orchestrator | Monday 16 February 2026 04:19:42 +0000 (0:00:00.072) 0:01:18.003 ******* 2026-02-16 04:20:06.661415 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:20:06.661424 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:20:06.661433 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:20:06.661441 | orchestrator | 2026-02-16 04:20:06.661450 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-16 04:20:06.661459 | orchestrator | Monday 16 February 2026 04:19:48 +0000 (0:00:06.251) 0:01:24.255 ******* 2026-02-16 04:20:06.661467 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:20:06.661476 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:20:06.661485 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:20:06.661494 | orchestrator | 2026-02-16 04:20:06.661503 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-16 04:20:06.661532 | orchestrator | Monday 16 February 2026 04:19:58 +0000 (0:00:09.652) 0:01:33.907 ******* 2026-02-16 04:20:06.661542 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:20:06.661550 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:20:06.661559 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:20:06.661567 | orchestrator | 2026-02-16 04:20:06.661576 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 04:20:06.661586 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-16 04:20:06.661597 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-16 04:20:06.661605 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-16 04:20:06.661614 | orchestrator | 2026-02-16 04:20:06.661623 | orchestrator | 2026-02-16 04:20:06.661632 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 04:20:06.661640 | orchestrator | Monday 16 February 2026 04:20:06 +0000 (0:00:08.136) 0:01:42.044 ******* 2026-02-16 04:20:06.661649 | orchestrator | =============================================================================== 2026-02-16 04:20:06.661657 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.40s 2026-02-16 04:20:06.661666 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.03s 2026-02-16 04:20:06.661674 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.65s 2026-02-16 04:20:06.661683 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.14s 2026-02-16 04:20:06.661691 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.43s 2026-02-16 04:20:06.661700 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.25s 2026-02-16 04:20:06.661709 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.02s 2026-02-16 04:20:06.661724 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.08s 2026-02-16 04:20:06.661737 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.83s 2026-02-16 04:20:06.661750 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.35s 2026-02-16 04:20:06.661762 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.27s 2026-02-16 04:20:06.661775 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.25s 
2026-02-16 04:20:06.661788 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.24s 2026-02-16 04:20:06.661800 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.25s 2026-02-16 04:20:06.661815 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.17s 2026-02-16 04:20:06.661849 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.11s 2026-02-16 04:20:06.661864 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.61s 2026-02-16 04:20:06.661888 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.40s 2026-02-16 04:20:06.661903 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.20s 2026-02-16 04:20:06.661918 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.83s 2026-02-16 04:20:08.960549 | orchestrator | 2026-02-16 04:20:08 | INFO  | Task bb11f643-88e6-4e15-8f5f-d7b975c95ce1 (designate) was prepared for execution. 2026-02-16 04:20:08.960629 | orchestrator | 2026-02-16 04:20:08 | INFO  | It takes a moment until task bb11f643-88e6-4e15-8f5f-d7b975c95ce1 (designate) has been started and output is visible here. 
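(Editor's note: the per-item dictionaries repeated throughout the barbican play above all share one container-definition shape. As a minimal sketch — values copied from the log output, the surrounding kolla-ansible variable structure inferred from it — the `barbican-worker` entry looks like this in Python; the empty strings in `volumes` appear in the log as-is and are left in place.)

```python
# Sketch of one entry from the container-definition dict the log items show.
# All values are taken verbatim from the log above; treating it as a plain
# Python dict (rather than kolla-ansible's full variable tree) is an assumption.
barbican_worker = {
    "container_name": "barbican_worker",
    "group": "barbican-worker",
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130",
    "volumes": [
        "/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
        "",  # empty slots appear verbatim in the deployed definition
        "",
    ],
    "dimensions": {},
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_port barbican-worker 5672"],
        "timeout": "30",
    },
}

# The healthcheck probes the RabbitMQ port (5672) the worker connects to,
# every 30s, allowing 3 retries before the container is marked unhealthy.
```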
2026-02-16 04:20:40.779971 | orchestrator | 2026-02-16 04:20:40.780121 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 04:20:40.780167 | orchestrator | 2026-02-16 04:20:40.780181 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 04:20:40.780193 | orchestrator | Monday 16 February 2026 04:20:13 +0000 (0:00:00.272) 0:00:00.272 ******* 2026-02-16 04:20:40.780204 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:20:40.780216 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:20:40.780227 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:20:40.780238 | orchestrator | 2026-02-16 04:20:40.780249 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 04:20:40.780260 | orchestrator | Monday 16 February 2026 04:20:13 +0000 (0:00:00.308) 0:00:00.580 ******* 2026-02-16 04:20:40.780271 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-02-16 04:20:40.780283 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-02-16 04:20:40.780294 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-02-16 04:20:40.780305 | orchestrator | 2026-02-16 04:20:40.780315 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-02-16 04:20:40.780326 | orchestrator | 2026-02-16 04:20:40.780337 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-16 04:20:40.780347 | orchestrator | Monday 16 February 2026 04:20:13 +0000 (0:00:00.412) 0:00:00.993 ******* 2026-02-16 04:20:40.780359 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 04:20:40.780370 | orchestrator | 2026-02-16 04:20:40.780381 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-02-16 04:20:40.780392 | orchestrator | Monday 16 February 2026 04:20:14 +0000 (0:00:00.553) 0:00:01.546 ******* 2026-02-16 04:20:40.780403 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-02-16 04:20:40.780413 | orchestrator | 2026-02-16 04:20:40.780424 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-02-16 04:20:40.780435 | orchestrator | Monday 16 February 2026 04:20:17 +0000 (0:00:03.476) 0:00:05.022 ******* 2026-02-16 04:20:40.780445 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-02-16 04:20:40.780456 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-02-16 04:20:40.780467 | orchestrator | 2026-02-16 04:20:40.780478 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-02-16 04:20:40.780488 | orchestrator | Monday 16 February 2026 04:20:24 +0000 (0:00:06.416) 0:00:11.439 ******* 2026-02-16 04:20:40.780499 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-16 04:20:40.780510 | orchestrator | 2026-02-16 04:20:40.780521 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-02-16 04:20:40.780532 | orchestrator | Monday 16 February 2026 04:20:27 +0000 (0:00:03.292) 0:00:14.731 ******* 2026-02-16 04:20:40.780542 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-16 04:20:40.780553 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-02-16 04:20:40.780564 | orchestrator | 2026-02-16 04:20:40.780575 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-02-16 04:20:40.780585 | orchestrator | Monday 16 February 2026 04:20:31 +0000 (0:00:04.121) 0:00:18.853 ******* 2026-02-16 04:20:40.780596 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2026-02-16 04:20:40.780607 | orchestrator | 2026-02-16 04:20:40.780618 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-02-16 04:20:40.780629 | orchestrator | Monday 16 February 2026 04:20:34 +0000 (0:00:03.152) 0:00:22.006 ******* 2026-02-16 04:20:40.780640 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-16 04:20:40.780650 | orchestrator | 2026-02-16 04:20:40.780661 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-16 04:20:40.780672 | orchestrator | Monday 16 February 2026 04:20:38 +0000 (0:00:03.898) 0:00:25.905 ******* 2026-02-16 04:20:40.780708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-16 04:20:40.780746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-16 04:20:40.780759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-16 04:20:40.780772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-16 04:20:40.780785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-16 04:20:40.780804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-16 04:20:40.780821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:40.780842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:46.808866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:46.808993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:46.809011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:46.809024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:46.809060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:46.809087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:46.809191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:46.809204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-16 
04:20:46.809216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:46.809227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:46.809248 | orchestrator | 2026-02-16 04:20:46.809262 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-16 04:20:46.809274 | orchestrator | Monday 16 February 2026 04:20:41 +0000 (0:00:02.803) 0:00:28.709 ******* 2026-02-16 04:20:46.809285 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:20:46.809297 | orchestrator | 2026-02-16 04:20:46.809308 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-16 04:20:46.809319 | orchestrator | Monday 16 February 2026 04:20:41 +0000 (0:00:00.132) 0:00:28.841 ******* 2026-02-16 04:20:46.809330 | orchestrator | skipping: [testbed-node-0] 2026-02-16 
04:20:46.809341 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:20:46.809352 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:20:46.809363 | orchestrator | 2026-02-16 04:20:46.809374 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-16 04:20:46.809385 | orchestrator | Monday 16 February 2026 04:20:42 +0000 (0:00:00.501) 0:00:29.342 ******* 2026-02-16 04:20:46.809397 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 04:20:46.809410 | orchestrator | 2026-02-16 04:20:46.809422 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-16 04:20:46.809434 | orchestrator | Monday 16 February 2026 04:20:42 +0000 (0:00:00.560) 0:00:29.903 ******* 2026-02-16 04:20:46.809454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-16 04:20:46.809489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-16 04:20:48.691917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-16 04:20:48.692085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-16 04:20:48.692205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-16 04:20:48.692247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-16 04:20:48.692269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:48.692314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:48.692336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:48.692371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:48.692393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:48.692412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:48.692440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:48.692461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:48.692497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:49.543212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:49.543341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:49.543361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:20:49.543372 | orchestrator | 2026-02-16 04:20:49.543385 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-16 04:20:49.543398 | orchestrator | Monday 16 February 2026 04:20:48 +0000 (0:00:05.948) 0:00:35.851 ******* 2026-02-16 04:20:49.543426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-16 04:20:49.543439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-16 04:20:49.543470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:20:49.543493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:20:49.543505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:20:49.543515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:20:49.543525 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:20:49.543542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 04:20:49.543552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:20:49.543591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:20:49.543619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:20:50.305701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:20:50.305804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:20:50.305821 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:20:50.305845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 04:20:50.305854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:20:50.305862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:20:50.305886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:20:50.305906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:20:50.305913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:20:50.305919 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:20:50.305925 | orchestrator |
2026-02-16 04:20:50.305932 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-02-16 04:20:50.305940 | orchestrator | Monday 16 February 2026 04:20:49 +0000 (0:00:01.004) 0:00:36.856 *******
2026-02-16 04:20:50.305950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 04:20:50.305957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:20:50.305967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:20:50.305978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:20:50.612668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:20:50.612756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:20:50.612770 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:20:50.612797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 04:20:50.612809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:20:50.612836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:20:50.612845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:20:50.612868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:20:50.612878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:20:50.612886 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:20:50.612899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 04:20:50.612908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:20:50.612922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:20:50.612930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:20:50.612943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:20:54.890505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:20:54.890620 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:20:54.890639 | orchestrator |
2026-02-16 04:20:54.890652 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-02-16 04:20:54.890666 | orchestrator | Monday 16 February 2026 04:20:50 +0000 (0:00:00.915) 0:00:37.771 *******
2026-02-16 04:20:54.890694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 04:20:54.890731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 04:20:54.890745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 04:20:54.890774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:20:54.890789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:20:54.890806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:20:54.890822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:20:54.890855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:20:54.890876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:20:54.890897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:20:54.890930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:21:06.181282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:21:06.181428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:21:06.181492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:21:06.181512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:21:06.181532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:21:06.181551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:21:06.181593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:21:06.181615 | orchestrator |
2026-02-16 04:21:06.181637 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-02-16 04:21:06.181656 | orchestrator | Monday 16 February 2026 04:20:56 +0000 (0:00:06.074) 0:00:43.845 *******
2026-02-16 04:21:06.181687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 04:21:06.181719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 04:21:06.181731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 04:21:06.181745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:21:06.181826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:21:14.050119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:21:14.050330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value':
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-16 04:21:14.050356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-16 04:21:14.050373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-16 04:21:14.050390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-16 04:21:14.050409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-16 04:21:14.050443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-16 04:21:14.050490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-16 04:21:14.050506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-16 04:21:14.050521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-16 04:21:14.050562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:21:14.050595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:21:14.050615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:21:14.050644 | orchestrator | 2026-02-16 04:21:14.050666 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-16 04:21:14.050688 | orchestrator | Monday 16 February 2026 04:21:10 +0000 (0:00:13.804) 0:00:57.650 ******* 2026-02-16 04:21:14.050751 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-16 04:21:18.199883 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-16 04:21:18.200007 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-16 04:21:18.200032 | orchestrator | 2026-02-16 04:21:18.200052 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-16 04:21:18.200071 | orchestrator | Monday 16 February 2026 04:21:14 +0000 (0:00:03.556) 0:01:01.206 ******* 2026-02-16 04:21:18.200088 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-16 04:21:18.200106 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-16 04:21:18.200123 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-16 04:21:18.200137 | orchestrator | 2026-02-16 04:21:18.200147 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-16 04:21:18.200173 | orchestrator | Monday 16 February 2026 04:21:16 +0000 (0:00:02.377) 0:01:03.584 ******* 2026-02-16 04:21:18.200252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-16 04:21:18.200271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-16 04:21:18.200282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-02-16 04:21:18.200311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-16 04:21:18.200349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-16 04:21:18.200367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-16 04:21:18.200379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-16 04:21:18.200390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-16 04:21:18.200400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-02-16 04:21:18.200410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-16 04:21:18.200435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-16 04:21:20.938417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-02-16 04:21:20.938509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-16 04:21:20.938521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-16 04:21:20.938530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-16 04:21:20.938538 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:21:20.938563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:21:20.938585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:21:20.938594 | orchestrator | 2026-02-16 04:21:20.938603 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-02-16 04:21:20.938612 | orchestrator | Monday 16 February 2026 04:21:19 +0000 (0:00:02.815) 0:01:06.400 ******* 2026-02-16 04:21:20.938626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-16 04:21:20.938636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-16 
04:21:20.938644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-16 04:21:20.938657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-16 04:21:20.938669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:21:21.938322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:21:21.938428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:21:21.938446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:21:21.938460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:21:21.938494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:21:21.938506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:21:21.938544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:21:21.938557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:21:21.938569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:21:21.938580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:21:21.938600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:21:21.938614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:21:21.938625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:21:21.938638 | orchestrator |
2026-02-16 04:21:21.938651 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-16 04:21:21.938671 | orchestrator | Monday 16 February 2026 04:21:21 +0000 (0:00:02.692) 0:01:09.092 *******
2026-02-16 04:21:22.849760 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:21:22.849841 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:21:22.849851 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:21:22.849859 | orchestrator |
2026-02-16 04:21:22.849868 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-02-16 04:21:22.849890 | orchestrator | Monday 16 February 2026 04:21:22 +0000 (0:00:00.283) 0:01:09.377 *******
2026-02-16 04:21:22.849901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 04:21:22.849914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:21:22.849941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:21:22.849950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:21:22.849959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:21:22.849981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:21:22.849989 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:21:22.850001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 04:21:22.850009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:21:22.850067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:21:22.850075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:21:22.850083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:21:22.850095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:21:26.106313 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:21:26.106422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 04:21:26.106438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:21:26.106468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:21:26.106479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:21:26.106488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:21:26.106499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130',
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:21:26.106513 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:21:26.106527 | orchestrator |
2026-02-16 04:21:26.106559 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-02-16 04:21:26.106574 | orchestrator | Monday 16 February 2026 04:21:22 +0000 (0:00:00.735) 0:01:10.112 *******
2026-02-16 04:21:26.106597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 04:21:26.106623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 04:21:26.106639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-16 04:21:26.106649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:21:26.106664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:21:27.816593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-16 04:21:27.816701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:21:27.816742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:21:27.816756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-16 04:21:27.816768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:21:27.816781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:21:27.816817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-16 04:21:27.816830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:21:27.816849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:21:27.816861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-16 04:21:27.816873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:21:27.816884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:21:27.816896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:21:27.816908 | orchestrator |
2026-02-16 04:21:27.816921 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-16 04:21:27.816934 | orchestrator | Monday 16 February 2026 04:21:27 +0000 (0:00:04.359) 0:01:14.471 *******
2026-02-16 04:21:27.816945 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:21:27.816963 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:22:50.741473 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:22:50.741602 | orchestrator |
2026-02-16 04:22:50.741623 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-02-16 04:22:50.741768 | orchestrator | Monday 16 February 2026 04:21:27 +0000 (0:00:00.506) 0:01:14.978 *******
2026-02-16 04:22:50.741803 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-02-16 04:22:50.741819 | orchestrator |
2026-02-16 04:22:50.741846 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-02-16 04:22:50.741859 | orchestrator | Monday 16 February 2026 04:21:30 +0000 (0:00:02.191) 0:01:17.170 *******
2026-02-16 04:22:50.741873 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-16 04:22:50.741887 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-02-16 04:22:50.741901 | orchestrator |
2026-02-16 04:22:50.741914 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-02-16 04:22:50.741927 | orchestrator | Monday 16 February 2026 04:21:32 +0000 (0:00:02.261) 0:01:19.431 *******
2026-02-16 04:22:50.741940 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:22:50.741957 | orchestrator |
2026-02-16 04:22:50.741973 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-16 04:22:50.741990 | orchestrator | Monday 16 February 2026 04:21:48 +0000 (0:00:16.082) 0:01:35.514 *******
2026-02-16 04:22:50.742005 | orchestrator |
2026-02-16 04:22:50.742074 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-16 04:22:50.742092 | orchestrator | Monday 16 February 2026 04:21:48 +0000 (0:00:00.070) 0:01:35.584 *******
2026-02-16 04:22:50.742107 | orchestrator |
2026-02-16 04:22:50.742126 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-16 04:22:50.742213 | orchestrator | Monday 16 February 2026 04:21:48 +0000 (0:00:00.072) 0:01:35.657 *******
2026-02-16 04:22:50.742228 | orchestrator |
2026-02-16 04:22:50.742242 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-02-16 04:22:50.742260 | orchestrator | Monday 16 February 2026 04:21:48 +0000 (0:00:00.086) 0:01:35.744 *******
2026-02-16 04:22:50.742274 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:22:50.742288 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:22:50.742301 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:22:50.742315 | orchestrator |
2026-02-16 04:22:50.742329 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-02-16 04:22:50.742343 | orchestrator | Monday 16 February 2026 04:21:57 +0000 (0:00:08.707) 0:01:44.451 *******
2026-02-16 04:22:50.742357 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:22:50.742371 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:22:50.742385 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:22:50.742399 | orchestrator |
2026-02-16 04:22:50.742413 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-02-16 04:22:50.742465 | orchestrator | Monday 16 February 2026 04:22:07 +0000 (0:00:10.451) 0:01:54.903 *******
2026-02-16 04:22:50.742480 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:22:50.742493 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:22:50.742508 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:22:50.742521 | orchestrator |
2026-02-16 04:22:50.742535 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-02-16 04:22:50.742549 | orchestrator | Monday 16 February 2026 04:22:18 +0000 (0:00:10.650) 0:02:05.553 *******
2026-02-16 04:22:50.742562 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:22:50.742575 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:22:50.742590 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:22:50.742603 | orchestrator |
2026-02-16 04:22:50.742617
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-16 04:22:50.742631 | orchestrator | Monday 16 February 2026 04:22:23 +0000 (0:00:05.565) 0:02:11.118 ******* 2026-02-16 04:22:50.742645 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:22:50.742659 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:22:50.742673 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:22:50.742686 | orchestrator | 2026-02-16 04:22:50.742701 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-16 04:22:50.742732 | orchestrator | Monday 16 February 2026 04:22:34 +0000 (0:00:10.587) 0:02:21.706 ******* 2026-02-16 04:22:50.742747 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:22:50.742761 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:22:50.742775 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:22:50.742789 | orchestrator | 2026-02-16 04:22:50.742803 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-16 04:22:50.742818 | orchestrator | Monday 16 February 2026 04:22:43 +0000 (0:00:08.538) 0:02:30.244 ******* 2026-02-16 04:22:50.742832 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:22:50.742846 | orchestrator | 2026-02-16 04:22:50.742861 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 04:22:50.742877 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-16 04:22:50.742894 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-16 04:22:50.742908 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-16 04:22:50.742923 | orchestrator | 2026-02-16 04:22:50.742938 | orchestrator | 2026-02-16 04:22:50.742952 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-16 04:22:50.742966 | orchestrator | Monday 16 February 2026 04:22:50 +0000 (0:00:07.253) 0:02:37.498 ******* 2026-02-16 04:22:50.742981 | orchestrator | =============================================================================== 2026-02-16 04:22:50.742996 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.08s 2026-02-16 04:22:50.743011 | orchestrator | designate : Copying over designate.conf -------------------------------- 13.80s 2026-02-16 04:22:50.743048 | orchestrator | designate : Restart designate-central container ------------------------ 10.65s 2026-02-16 04:22:50.743063 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.59s 2026-02-16 04:22:50.743086 | orchestrator | designate : Restart designate-api container ---------------------------- 10.45s 2026-02-16 04:22:50.743100 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.71s 2026-02-16 04:22:50.743113 | orchestrator | designate : Restart designate-worker container -------------------------- 8.54s 2026-02-16 04:22:50.743125 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.25s 2026-02-16 04:22:50.743138 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.42s 2026-02-16 04:22:50.743151 | orchestrator | designate : Copying over config.json files for services ----------------- 6.07s 2026-02-16 04:22:50.743163 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.95s 2026-02-16 04:22:50.743177 | orchestrator | designate : Restart designate-producer container ------------------------ 5.57s 2026-02-16 04:22:50.743190 | orchestrator | designate : Check designate containers ---------------------------------- 4.36s 2026-02-16 04:22:50.743204 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 4.12s 2026-02-16 04:22:50.743218 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.90s 2026-02-16 04:22:50.743231 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.56s 2026-02-16 04:22:50.743244 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.48s 2026-02-16 04:22:50.743258 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.29s 2026-02-16 04:22:50.743274 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.15s 2026-02-16 04:22:50.743287 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.82s 2026-02-16 04:22:53.123953 | orchestrator | 2026-02-16 04:22:53 | INFO  | Task bceb21d2-5c7c-4ab1-a941-9dcd65e3cdff (octavia) was prepared for execution. 2026-02-16 04:22:53.124063 | orchestrator | 2026-02-16 04:22:53 | INFO  | It takes a moment until task bceb21d2-5c7c-4ab1-a941-9dcd65e3cdff (octavia) has been started and output is visible here. 
2026-02-16 04:25:00.273379 | orchestrator |
2026-02-16 04:25:00.273512 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-16 04:25:00.273531 | orchestrator |
2026-02-16 04:25:00.273544 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-16 04:25:00.273555 | orchestrator | Monday 16 February 2026 04:22:57 +0000 (0:00:00.253) 0:00:00.253 *******
2026-02-16 04:25:00.273567 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:25:00.273579 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:25:00.273590 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:25:00.273601 | orchestrator |
2026-02-16 04:25:00.273612 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-16 04:25:00.273624 | orchestrator | Monday 16 February 2026 04:22:57 +0000 (0:00:00.317) 0:00:00.571 *******
2026-02-16 04:25:00.273635 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-02-16 04:25:00.273646 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-02-16 04:25:00.273657 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-02-16 04:25:00.273668 | orchestrator |
2026-02-16 04:25:00.273734 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-02-16 04:25:00.273747 | orchestrator |
2026-02-16 04:25:00.273758 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-16 04:25:00.273770 | orchestrator | Monday 16 February 2026 04:22:58 +0000 (0:00:00.430) 0:00:01.002 *******
2026-02-16 04:25:00.273781 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:25:00.273793 | orchestrator |
2026-02-16 04:25:00.273805 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-02-16 04:25:00.273816 | orchestrator | Monday 16 February 2026 04:22:58 +0000 (0:00:00.596) 0:00:01.598 *******
2026-02-16 04:25:00.273827 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-02-16 04:25:00.273838 | orchestrator |
2026-02-16 04:25:00.273849 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-02-16 04:25:00.273860 | orchestrator | Monday 16 February 2026 04:23:02 +0000 (0:00:03.509) 0:00:05.108 *******
2026-02-16 04:25:00.273871 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-02-16 04:25:00.273882 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-02-16 04:25:00.273893 | orchestrator |
2026-02-16 04:25:00.273904 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-02-16 04:25:00.273916 | orchestrator | Monday 16 February 2026 04:23:08 +0000 (0:00:06.594) 0:00:11.702 *******
2026-02-16 04:25:00.273927 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-16 04:25:00.273938 | orchestrator |
2026-02-16 04:25:00.273949 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-02-16 04:25:00.273960 | orchestrator | Monday 16 February 2026 04:23:11 +0000 (0:00:03.225) 0:00:14.928 *******
2026-02-16 04:25:00.273971 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-16 04:25:00.273983 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-16 04:25:00.273994 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-16 04:25:00.274005 | orchestrator |
2026-02-16 04:25:00.274078 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-02-16 04:25:00.274093 | orchestrator | Monday 16 February 2026 04:23:20 +0000 (0:00:08.537) 0:00:23.465 *******
2026-02-16 04:25:00.274104 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-16 04:25:00.274116 | orchestrator |
2026-02-16 04:25:00.274143 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-02-16 04:25:00.274155 | orchestrator | Monday 16 February 2026 04:23:23 +0000 (0:00:03.248) 0:00:26.714 *******
2026-02-16 04:25:00.274189 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-16 04:25:00.274200 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-16 04:25:00.274211 | orchestrator |
2026-02-16 04:25:00.274222 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-02-16 04:25:00.274233 | orchestrator | Monday 16 February 2026 04:23:31 +0000 (0:00:07.280) 0:00:33.994 *******
2026-02-16 04:25:00.274244 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-02-16 04:25:00.274255 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-02-16 04:25:00.274266 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-02-16 04:25:00.274277 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-02-16 04:25:00.274287 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-02-16 04:25:00.274298 | orchestrator |
2026-02-16 04:25:00.274309 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-16 04:25:00.274320 | orchestrator | Monday 16 February 2026 04:23:46 +0000 (0:00:15.607) 0:00:49.602 *******
2026-02-16 04:25:00.274331 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:25:00.274347 | orchestrator |
2026-02-16 04:25:00.274366 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-02-16 04:25:00.274384 | orchestrator | Monday 16 February 2026 04:23:47 +0000 (0:00:00.751) 0:00:50.354 *******
2026-02-16 04:25:00.274400 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:25:00.274420 | orchestrator |
2026-02-16 04:25:00.274438 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-02-16 04:25:00.274457 | orchestrator | Monday 16 February 2026 04:23:52 +0000 (0:00:04.883) 0:00:55.237 *******
2026-02-16 04:25:00.274469 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:25:00.274480 | orchestrator |
2026-02-16 04:25:00.274491 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-16 04:25:00.274523 | orchestrator | Monday 16 February 2026 04:23:56 +0000 (0:00:04.012) 0:00:59.250 *******
2026-02-16 04:25:00.274535 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:25:00.274546 | orchestrator |
2026-02-16 04:25:00.274557 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-02-16 04:25:00.274568 | orchestrator | Monday 16 February 2026 04:23:59 +0000 (0:00:03.202) 0:01:02.453 *******
2026-02-16 04:25:00.274578 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-02-16 04:25:00.274589 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-02-16 04:25:00.274600 | orchestrator |
2026-02-16 04:25:00.274611 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-02-16 04:25:00.274622 | orchestrator | Monday 16 February 2026 04:24:09 +0000 (0:00:10.469) 0:01:12.923 *******
2026-02-16 04:25:00.274633 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-02-16 04:25:00.274644 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-02-16 04:25:00.274656 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-02-16 04:25:00.274668 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-02-16 04:25:00.274736 | orchestrator |
2026-02-16 04:25:00.274748 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-02-16 04:25:00.274759 | orchestrator | Monday 16 February 2026 04:24:25 +0000 (0:00:15.305) 0:01:28.228 *******
2026-02-16 04:25:00.274770 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:25:00.274781 | orchestrator |
2026-02-16 04:25:00.274792 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-02-16 04:25:00.274812 | orchestrator | Monday 16 February 2026 04:24:29 +0000 (0:00:04.658) 0:01:32.887 *******
2026-02-16 04:25:00.274824 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:25:00.274835 | orchestrator |
2026-02-16 04:25:00.274846 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-02-16 04:25:00.274857 | orchestrator | Monday 16 February 2026 04:24:35 +0000 (0:00:05.295) 0:01:38.183 *******
2026-02-16 04:25:00.274867 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:25:00.274878 | orchestrator |
2026-02-16 04:25:00.274890 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-02-16 04:25:00.274901 | orchestrator | Monday 16 February 2026 04:24:35 +0000 (0:00:00.196) 0:01:38.379 *******
2026-02-16 04:25:00.274912 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:25:00.274923 | orchestrator |
2026-02-16 04:25:00.274939 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-16 04:25:00.274959 | orchestrator | Monday 16 February 2026 04:24:39 +0000 (0:00:04.538) 0:01:42.918 *******
2026-02-16 04:25:00.274978 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:25:00.275010 | orchestrator |
2026-02-16 04:25:00.275022 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-02-16 04:25:00.275033 | orchestrator | Monday 16 February 2026 04:24:41 +0000 (0:00:01.145) 0:01:44.064 *******
2026-02-16 04:25:00.275044 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:25:00.275055 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:25:00.275066 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:25:00.275076 | orchestrator |
2026-02-16 04:25:00.275088 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-02-16 04:25:00.275105 | orchestrator | Monday 16 February 2026 04:24:47 +0000 (0:00:05.949) 0:01:50.013 *******
2026-02-16 04:25:00.275116 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:25:00.275127 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:25:00.275138 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:25:00.275153 | orchestrator |
2026-02-16 04:25:00.275171 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-02-16 04:25:00.275191 | orchestrator | Monday 16 February 2026 04:24:52 +0000 (0:00:05.600) 0:01:55.614 *******
2026-02-16 04:25:00.275205 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:25:00.275216 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:25:00.275227 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:25:00.275238 | orchestrator |
2026-02-16 04:25:00.275249 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-02-16 04:25:00.275260 | orchestrator | Monday 16 February 2026 04:24:53 +0000 (0:00:01.729) 0:01:56.683 *******
2026-02-16 04:25:00.275270 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:25:00.275281 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:25:00.275292 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:25:00.275303 | orchestrator |
2026-02-16 04:25:00.275314 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-02-16 04:25:00.275325 | orchestrator | Monday 16 February 2026 04:24:55 +0000 (0:00:01.302) 0:01:58.413 *******
2026-02-16 04:25:00.275335 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:25:00.275346 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:25:00.275357 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:25:00.275368 | orchestrator |
2026-02-16 04:25:00.275379 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-02-16 04:25:00.275390 | orchestrator | Monday 16 February 2026 04:24:56 +0000 (0:00:01.210) 0:01:59.715 *******
2026-02-16 04:25:00.275455 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:25:00.275468 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:25:00.275479 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:25:00.275490 | orchestrator |
2026-02-16 04:25:00.275501 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-02-16 04:25:00.275512 | orchestrator | Monday 16 February 2026 04:24:57 +0000 (0:00:01.210) 0:02:00.925 *******
2026-02-16 04:25:00.275531 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:25:00.275542 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:25:00.275553 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:25:00.275564 | orchestrator |
2026-02-16 04:25:00.275585 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-02-16 04:25:25.824473 | orchestrator | Monday 16 February 2026 04:25:00 +0000 (0:00:02.260) 0:02:03.186 *******
2026-02-16 04:25:25.824619 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:25:25.824638 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:25:25.824650 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:25:25.824662 | orchestrator |
2026-02-16 04:25:25.824675 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-02-16 04:25:25.824687 | orchestrator | Monday 16 February 2026 04:25:01 +0000 (0:00:01.525) 0:02:04.712 *******
2026-02-16 04:25:25.824698 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:25:25.824710 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:25:25.824789 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:25:25.824809 | orchestrator |
2026-02-16 04:25:25.824827 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-02-16 04:25:25.824846 | orchestrator | Monday 16 February 2026 04:25:02 +0000 (0:00:00.640) 0:02:05.353 *******
2026-02-16 04:25:25.824864 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:25:25.824882 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:25:25.824897 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:25:25.824913 | orchestrator |
2026-02-16 04:25:25.824932 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-16 04:25:25.824950 | orchestrator | Monday 16 February 2026 04:25:05 +0000 (0:00:03.109) 0:02:08.463 *******
2026-02-16 04:25:25.824970 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:25:25.824987 | orchestrator |
2026-02-16 04:25:25.824998 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-02-16 04:25:25.825011 | orchestrator | Monday 16 February 2026 04:25:06 +0000 (0:00:00.537) 0:02:09.000 *******
2026-02-16 04:25:25.825024 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:25:25.825036 | orchestrator |
2026-02-16 04:25:25.825048 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-16 04:25:25.825060 | orchestrator | Monday 16 February 2026 04:25:09 +0000 (0:00:03.542) 0:02:12.542 *******
2026-02-16 04:25:25.825072 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:25:25.825084 | orchestrator |
2026-02-16 04:25:25.825098 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-02-16 04:25:25.825111 | orchestrator | Monday 16 February 2026 04:25:12 +0000 (0:00:03.237) 0:02:15.780 *******
2026-02-16 04:25:25.825123 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-02-16 04:25:25.825136 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-02-16 04:25:25.825148 | orchestrator |
2026-02-16 04:25:25.825161 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-02-16 04:25:25.825174 | orchestrator | Monday 16 February 2026 04:25:19 +0000 (0:00:06.738) 0:02:22.518 *******
2026-02-16 04:25:25.825186 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:25:25.825198 | orchestrator |
2026-02-16 04:25:25.825211 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-02-16 04:25:25.825224 | orchestrator | Monday 16 February 2026 04:25:23 +0000 (0:00:03.808) 0:02:26.327 *******
2026-02-16 04:25:25.825236 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:25:25.825248 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:25:25.825260 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:25:25.825272 | orchestrator |
2026-02-16 04:25:25.825285 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-02-16 04:25:25.825298 | orchestrator | Monday 16 February 2026 04:25:23 +0000 (0:00:00.497) 0:02:26.825 *******
2026-02-16 04:25:25.825330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-16 04:25:25.825394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-16 04:25:25.825408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-16 04:25:25.825472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-16 04:25:25.825485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-16 04:25:25.825503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-16 04:25:25.825525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-16 04:25:25.825537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-16 04:25:25.825557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-16 04:25:27.243790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-16 04:25:27.243896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-16 04:25:27.243911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-16 04:25:27.243965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:25:27.243979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:25:27.243990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-16 04:25:27.244002 | orchestrator |
2026-02-16 04:25:27.244016 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-02-16 04:25:27.244029 | orchestrator | Monday 16 February 2026 04:25:26 +0000 (0:00:02.334) 0:02:29.159 *******
2026-02-16 04:25:27.244039 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:25:27.244052 | orchestrator |
2026-02-16 04:25:27.244063 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-02-16 04:25:27.244074 | orchestrator | Monday 16 February 2026 04:25:26 +0000 (0:00:00.141) 0:02:29.300 *******
2026-02-16 04:25:27.244085 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:25:27.244114 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:25:27.244126 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:25:27.244137 | orchestrator |
2026-02-16 04:25:27.244148 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-02-16 04:25:27.244159 | orchestrator | Monday 16 February 2026 04:25:26 +0000 (0:00:00.326) 0:02:29.627 *******
2026-02-16 04:25:27.244171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-16 04:25:27.244191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 04:25:27.244208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 04:25:27.244221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 04:25:27.244232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 04:25:27.244243 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:25:27.244264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-16 04:25:31.972876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 04:25:31.973015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 04:25:31.973047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 04:25:31.973062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 04:25:31.973074 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:25:31.973088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-16 04:25:31.973102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 04:25:31.973132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 04:25:31.973158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 04:25:31.973175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 04:25:31.973187 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:25:31.973199 | orchestrator | 2026-02-16 04:25:31.973221 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-16 04:25:31.973241 | orchestrator | Monday 16 February 2026 04:25:27 +0000 (0:00:00.650) 0:02:30.277 ******* 2026-02-16 04:25:31.973258 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 04:25:31.973277 | orchestrator | 2026-02-16 04:25:31.973295 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-16 04:25:31.973313 | orchestrator | Monday 16 February 2026 04:25:28 +0000 (0:00:00.697) 0:02:30.974 ******* 2026-02-16 04:25:31.973368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 04:25:31.973390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 04:25:31.973439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 04:25:33.426092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-16 04:25:33.426185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-16 04:25:33.426195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-16 04:25:33.426203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-16 04:25:33.426211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-16 04:25:33.426218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-16 04:25:33.426255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-16 04:25:33.426263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-16 04:25:33.426273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-16 04:25:33.426280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:25:33.426287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:25:33.426294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:25:33.426306 | orchestrator | 2026-02-16 04:25:33.426314 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-16 04:25:33.426322 | orchestrator | Monday 16 February 2026 04:25:32 +0000 (0:00:04.850) 0:02:35.825 ******* 2026-02-16 04:25:33.426335 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-16 04:25:33.528023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 04:25:33.528156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 04:25:33.528173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 04:25:33.528186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 04:25:33.528219 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:25:33.528234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-16 04:25:33.528247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 04:25:33.528276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 04:25:33.528294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 04:25:33.528306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 04:25:33.528317 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:25:33.528329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-16 04:25:33.528348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 04:25:33.528360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 04:25:33.528379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-02-16 04:25:34.278652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 04:25:34.278815 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:25:34.278842 | orchestrator | 2026-02-16 04:25:34.278860 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-16 04:25:34.278893 | orchestrator | Monday 16 February 2026 04:25:33 +0000 (0:00:00.626) 0:02:36.452 ******* 2026-02-16 04:25:34.278912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-02-16 04:25:34.278967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 04:25:34.278978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 04:25:34.278988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 04:25:34.279017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 04:25:34.279027 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:25:34.279042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-16 04:25:34.279052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 04:25:34.279067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 04:25:34.279077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 04:25:34.279086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 04:25:34.279095 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:25:34.279114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-16 04:25:38.808539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 04:25:38.808631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 04:25:38.808663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 04:25:38.808673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 04:25:38.808682 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:25:38.808692 | orchestrator | 2026-02-16 04:25:38.808700 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-16 
04:25:38.808709 | orchestrator | Monday 16 February 2026 04:25:34 +0000 (0:00:01.225) 0:02:37.677 ******* 2026-02-16 04:25:38.808718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 04:25:38.808822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 04:25:38.808834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 04:25:38.808849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-16 04:25:38.808858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-16 04:25:38.808865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-16 04:25:38.808956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-16 04:25:38.808982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-16 04:25:54.274944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-16 04:25:54.275077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-16 04:25:54.275101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-16 04:25:54.275110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-16 04:25:54.275119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:25:54.275128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-02-16 04:25:54.275165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:25:54.275184 | orchestrator | 2026-02-16 04:25:54.275196 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-02-16 04:25:54.275206 | orchestrator | Monday 16 February 2026 04:25:39 +0000 (0:00:05.003) 0:02:42.680 ******* 2026-02-16 04:25:54.275215 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-16 04:25:54.275225 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-16 04:25:54.275233 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-16 04:25:54.275242 | orchestrator | 2026-02-16 04:25:54.275251 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-02-16 04:25:54.275260 | orchestrator | Monday 16 February 2026 04:25:41 +0000 (0:00:01.616) 0:02:44.297 ******* 2026-02-16 04:25:54.275270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 04:25:54.275281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 04:25:54.275290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 04:25:54.275311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-16 04:26:09.313141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-16 04:26:09.313266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-16 04:26:09.313289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-16 04:26:09.313307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-16 04:26:09.313323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-16 04:26:09.313339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-16 04:26:09.313421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-16 04:26:09.313441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-16 04:26:09.313457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:26:09.313474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:26:09.313489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:26:09.313505 | orchestrator | 2026-02-16 04:26:09.313522 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-02-16 04:26:09.313538 | orchestrator | Monday 16 February 2026 04:25:57 +0000 (0:00:16.030) 0:03:00.328 ******* 2026-02-16 04:26:09.313553 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:26:09.313563 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:26:09.313572 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:26:09.313580 | orchestrator | 2026-02-16 04:26:09.313588 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-02-16 04:26:09.313596 | orchestrator | Monday 16 February 2026 04:25:59 +0000 (0:00:01.699) 0:03:02.027 ******* 2026-02-16 04:26:09.313605 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-16 04:26:09.313613 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-16 04:26:09.313630 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-16 04:26:09.313638 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-16 04:26:09.313646 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-16 04:26:09.313654 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-16 04:26:09.313662 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-16 04:26:09.313669 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-16 04:26:09.313677 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-16 04:26:09.313686 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-16 04:26:09.313699 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-16 04:26:09.313712 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-16 04:26:09.313725 | orchestrator | 2026-02-16 04:26:09.313739 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-16 04:26:09.313759 | orchestrator | Monday 16 February 2026 04:26:04 +0000 (0:00:04.977) 0:03:07.005 ******* 2026-02-16 04:26:09.313774 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-16 04:26:09.313816 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-16 04:26:09.313841 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-16 04:26:17.470307 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-16 04:26:17.470425 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-16 04:26:17.470442 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-16 04:26:17.470453 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-16 04:26:17.470464 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-16 04:26:17.470475 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-16 04:26:17.470486 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-16 04:26:17.470497 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-16 04:26:17.470508 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-16 04:26:17.470519 | orchestrator | 2026-02-16 04:26:17.470532 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-16 04:26:17.470544 | orchestrator | Monday 16 February 2026 04:26:09 +0000 (0:00:05.225) 0:03:12.231 ******* 2026-02-16 04:26:17.470554 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-02-16 04:26:17.470565 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-16 04:26:17.470576 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-16 04:26:17.470587 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-16 04:26:17.470598 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-16 04:26:17.470608 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-16 04:26:17.470619 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-16 04:26:17.470630 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-16 04:26:17.470640 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-16 04:26:17.470651 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-16 04:26:17.470662 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-16 04:26:17.470673 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-16 04:26:17.470684 | orchestrator | 2026-02-16 04:26:17.470695 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-16 04:26:17.470706 | orchestrator | Monday 16 February 2026 04:26:14 +0000 (0:00:05.124) 0:03:17.355 ******* 2026-02-16 04:26:17.470721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 04:26:17.470758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 04:26:17.470841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 04:26:17.470859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-16 04:26:17.470873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-16 04:26:17.470886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-02-16 04:26:17.470908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-16 04:26:17.470921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-16 04:26:17.470938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-16 04:26:17.470959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-16 04:27:41.445248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-16 04:27:41.445353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-16 04:27:41.445391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:27:41.445405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:27:41.445417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-16 04:27:41.445429 | orchestrator | 2026-02-16 
04:27:41.445442 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-16 04:27:41.445455 | orchestrator | Monday 16 February 2026 04:26:18 +0000 (0:00:03.743) 0:03:21.099 ******* 2026-02-16 04:27:41.445466 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:27:41.445477 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:27:41.445488 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:27:41.445499 | orchestrator | 2026-02-16 04:27:41.445522 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-16 04:27:41.445534 | orchestrator | Monday 16 February 2026 04:26:18 +0000 (0:00:00.513) 0:03:21.613 ******* 2026-02-16 04:27:41.445545 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:27:41.445556 | orchestrator | 2026-02-16 04:27:41.445567 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-16 04:27:41.445578 | orchestrator | Monday 16 February 2026 04:26:20 +0000 (0:00:02.108) 0:03:23.721 ******* 2026-02-16 04:27:41.445589 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:27:41.445600 | orchestrator | 2026-02-16 04:27:41.445611 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-16 04:27:41.445622 | orchestrator | Monday 16 February 2026 04:26:22 +0000 (0:00:02.199) 0:03:25.920 ******* 2026-02-16 04:27:41.445633 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:27:41.445644 | orchestrator | 2026-02-16 04:27:41.445655 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-16 04:27:41.445667 | orchestrator | Monday 16 February 2026 04:26:25 +0000 (0:00:02.243) 0:03:28.164 ******* 2026-02-16 04:27:41.445693 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:27:41.445705 | orchestrator | 2026-02-16 04:27:41.445716 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-02-16 04:27:41.445727 | orchestrator | Monday 16 February 2026 04:26:27 +0000 (0:00:02.234) 0:03:30.399 ******* 2026-02-16 04:27:41.445746 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:27:41.445757 | orchestrator | 2026-02-16 04:27:41.445768 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-16 04:27:41.445779 | orchestrator | Monday 16 February 2026 04:26:50 +0000 (0:00:22.570) 0:03:52.969 ******* 2026-02-16 04:27:41.445789 | orchestrator | 2026-02-16 04:27:41.445802 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-16 04:27:41.445814 | orchestrator | Monday 16 February 2026 04:26:50 +0000 (0:00:00.069) 0:03:53.038 ******* 2026-02-16 04:27:41.445826 | orchestrator | 2026-02-16 04:27:41.445838 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-16 04:27:41.445851 | orchestrator | Monday 16 February 2026 04:26:50 +0000 (0:00:00.068) 0:03:53.106 ******* 2026-02-16 04:27:41.445863 | orchestrator | 2026-02-16 04:27:41.445875 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-16 04:27:41.445929 | orchestrator | Monday 16 February 2026 04:26:50 +0000 (0:00:00.068) 0:03:53.174 ******* 2026-02-16 04:27:41.445945 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:27:41.445958 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:27:41.445971 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:27:41.445983 | orchestrator | 2026-02-16 04:27:41.445996 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-16 04:27:41.446009 | orchestrator | Monday 16 February 2026 04:27:06 +0000 (0:00:15.813) 0:04:08.988 ******* 2026-02-16 04:27:41.446066 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:27:41.446079 | orchestrator | changed: 
[testbed-node-1] 2026-02-16 04:27:41.446091 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:27:41.446103 | orchestrator | 2026-02-16 04:27:41.446116 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-16 04:27:41.446129 | orchestrator | Monday 16 February 2026 04:27:17 +0000 (0:00:11.132) 0:04:20.120 ******* 2026-02-16 04:27:41.446141 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:27:41.446153 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:27:41.446166 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:27:41.446178 | orchestrator | 2026-02-16 04:27:41.446189 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-16 04:27:41.446200 | orchestrator | Monday 16 February 2026 04:27:22 +0000 (0:00:05.407) 0:04:25.527 ******* 2026-02-16 04:27:41.446211 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:27:41.446221 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:27:41.446232 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:27:41.446243 | orchestrator | 2026-02-16 04:27:41.446254 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-16 04:27:41.446265 | orchestrator | Monday 16 February 2026 04:27:30 +0000 (0:00:08.380) 0:04:33.908 ******* 2026-02-16 04:27:41.446276 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:27:41.446287 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:27:41.446297 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:27:41.446308 | orchestrator | 2026-02-16 04:27:41.446319 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 04:27:41.446330 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-16 04:27:41.446342 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-16 04:27:41.446353 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-16 04:27:41.446364 | orchestrator | 2026-02-16 04:27:41.446375 | orchestrator | 2026-02-16 04:27:41.446386 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 04:27:41.446397 | orchestrator | Monday 16 February 2026 04:27:41 +0000 (0:00:10.437) 0:04:44.346 ******* 2026-02-16 04:27:41.446408 | orchestrator | =============================================================================== 2026-02-16 04:27:41.446427 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.57s 2026-02-16 04:27:41.446438 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.03s 2026-02-16 04:27:41.446449 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.81s 2026-02-16 04:27:41.446460 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.61s 2026-02-16 04:27:41.446471 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.31s 2026-02-16 04:27:41.446487 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.13s 2026-02-16 04:27:41.446498 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.47s 2026-02-16 04:27:41.446509 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.44s 2026-02-16 04:27:41.446520 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.54s 2026-02-16 04:27:41.446530 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.38s 2026-02-16 04:27:41.446541 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.28s 2026-02-16 04:27:41.446552 
| orchestrator | octavia : Get security groups for octavia ------------------------------- 6.74s 2026-02-16 04:27:41.446563 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.59s 2026-02-16 04:27:41.446574 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.95s 2026-02-16 04:27:41.446592 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 5.60s 2026-02-16 04:27:41.795256 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.41s 2026-02-16 04:27:41.795359 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.30s 2026-02-16 04:27:41.795373 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.23s 2026-02-16 04:27:41.795384 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.12s 2026-02-16 04:27:41.795395 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.00s 2026-02-16 04:27:44.150632 | orchestrator | 2026-02-16 04:27:44 | INFO  | Task bf7139c2-6b3a-49e7-b360-a7ef6f2471a4 (ceilometer) was prepared for execution. 2026-02-16 04:27:44.150751 | orchestrator | 2026-02-16 04:27:44 | INFO  | It takes a moment until task bf7139c2-6b3a-49e7-b360-a7ef6f2471a4 (ceilometer) has been started and output is visible here. 
2026-02-16 04:28:07.055379 | orchestrator | 2026-02-16 04:28:07.055479 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 04:28:07.055493 | orchestrator | 2026-02-16 04:28:07.055501 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 04:28:07.055508 | orchestrator | Monday 16 February 2026 04:27:48 +0000 (0:00:00.269) 0:00:00.269 ******* 2026-02-16 04:28:07.055514 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:28:07.055522 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:28:07.055529 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:28:07.055535 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:28:07.055543 | orchestrator | ok: [testbed-node-4] 2026-02-16 04:28:07.055547 | orchestrator | ok: [testbed-node-5] 2026-02-16 04:28:07.055551 | orchestrator | 2026-02-16 04:28:07.055555 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 04:28:07.055560 | orchestrator | Monday 16 February 2026 04:27:48 +0000 (0:00:00.691) 0:00:00.960 ******* 2026-02-16 04:28:07.055565 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-02-16 04:28:07.055569 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-02-16 04:28:07.055573 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-02-16 04:28:07.055577 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-02-16 04:28:07.055581 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-02-16 04:28:07.055604 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-02-16 04:28:07.055609 | orchestrator | 2026-02-16 04:28:07.055613 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-02-16 04:28:07.055616 | orchestrator | 2026-02-16 04:28:07.055620 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-02-16 04:28:07.055624 | orchestrator | Monday 16 February 2026 04:27:49 +0000 (0:00:00.576) 0:00:01.537 ******* 2026-02-16 04:28:07.055629 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 04:28:07.055635 | orchestrator | 2026-02-16 04:28:07.055639 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-02-16 04:28:07.055642 | orchestrator | Monday 16 February 2026 04:27:50 +0000 (0:00:01.110) 0:00:02.648 ******* 2026-02-16 04:28:07.055646 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:28:07.055650 | orchestrator | 2026-02-16 04:28:07.055654 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-02-16 04:28:07.055658 | orchestrator | Monday 16 February 2026 04:27:50 +0000 (0:00:00.102) 0:00:02.751 ******* 2026-02-16 04:28:07.055662 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:28:07.055665 | orchestrator | 2026-02-16 04:28:07.055669 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-02-16 04:28:07.055673 | orchestrator | Monday 16 February 2026 04:27:50 +0000 (0:00:00.113) 0:00:02.864 ******* 2026-02-16 04:28:07.055677 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-16 04:28:07.055681 | orchestrator | 2026-02-16 04:28:07.055685 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-02-16 04:28:07.055689 | orchestrator | Monday 16 February 2026 04:27:54 +0000 (0:00:03.596) 0:00:06.461 ******* 2026-02-16 04:28:07.055692 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-16 04:28:07.055696 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-02-16 04:28:07.055700 | orchestrator | 
2026-02-16 04:28:07.055704 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-02-16 04:28:07.055708 | orchestrator | Monday 16 February 2026 04:27:58 +0000 (0:00:03.799) 0:00:10.260 ******* 2026-02-16 04:28:07.055712 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-16 04:28:07.055716 | orchestrator | 2026-02-16 04:28:07.055720 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-02-16 04:28:07.055734 | orchestrator | Monday 16 February 2026 04:28:01 +0000 (0:00:03.141) 0:00:13.401 ******* 2026-02-16 04:28:07.055738 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-02-16 04:28:07.055742 | orchestrator | 2026-02-16 04:28:07.055746 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-02-16 04:28:07.055749 | orchestrator | Monday 16 February 2026 04:28:05 +0000 (0:00:04.091) 0:00:17.492 ******* 2026-02-16 04:28:07.055753 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:28:07.055757 | orchestrator | 2026-02-16 04:28:07.055761 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-02-16 04:28:07.055765 | orchestrator | Monday 16 February 2026 04:28:05 +0000 (0:00:00.132) 0:00:17.625 ******* 2026-02-16 04:28:07.055770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:07.055790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:07.055799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:07.055804 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:07.055811 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:07.055818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-16 04:28:07.055823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-16 04:28:07.055831 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:11.543982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-16 04:28:11.544100 | orchestrator | 2026-02-16 04:28:11.544118 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-02-16 04:28:11.544132 | orchestrator | Monday 16 February 2026 04:28:07 +0000 (0:00:01.377) 0:00:19.002 ******* 2026-02-16 04:28:11.544143 | orchestrator | ok: 
[testbed-node-1 -> localhost] 2026-02-16 04:28:11.544156 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 04:28:11.544166 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-16 04:28:11.544177 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-16 04:28:11.544188 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-16 04:28:11.544199 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-16 04:28:11.544209 | orchestrator | 2026-02-16 04:28:11.544221 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-02-16 04:28:11.544233 | orchestrator | Monday 16 February 2026 04:28:08 +0000 (0:00:01.664) 0:00:20.667 ******* 2026-02-16 04:28:11.544244 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:28:11.544255 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:28:11.544266 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:28:11.544277 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:28:11.544287 | orchestrator | ok: [testbed-node-4] 2026-02-16 04:28:11.544298 | orchestrator | ok: [testbed-node-5] 2026-02-16 04:28:11.544309 | orchestrator | 2026-02-16 04:28:11.544320 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-02-16 04:28:11.544331 | orchestrator | Monday 16 February 2026 04:28:09 +0000 (0:00:00.570) 0:00:21.238 ******* 2026-02-16 04:28:11.544342 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:28:11.544352 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:28:11.544363 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:28:11.544375 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:28:11.544386 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:28:11.544397 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:28:11.544408 | orchestrator | 2026-02-16 04:28:11.544419 | orchestrator | TASK [ceilometer : Set the variable that control the copy of 
custom meter definitions] *** 2026-02-16 04:28:11.544430 | orchestrator | Monday 16 February 2026 04:28:10 +0000 (0:00:00.737) 0:00:21.976 ******* 2026-02-16 04:28:11.544441 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:28:11.544454 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:28:11.544466 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:28:11.544478 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:28:11.544491 | orchestrator | ok: [testbed-node-4] 2026-02-16 04:28:11.544503 | orchestrator | ok: [testbed-node-5] 2026-02-16 04:28:11.544516 | orchestrator | 2026-02-16 04:28:11.544529 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-02-16 04:28:11.544584 | orchestrator | Monday 16 February 2026 04:28:10 +0000 (0:00:00.615) 0:00:22.591 ******* 2026-02-16 04:28:11.544598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-16 04:28:11.544612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 04:28:11.544624 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:28:11.544656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-16 04:28:11.544668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 04:28:11.544680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-16 04:28:11.544691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 04:28:11.544711 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:28:11.544742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-16 04:28:11.544754 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:28:11.544766 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:28:11.544777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': 
{'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-16 04:28:11.544788 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:28:11.544809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-16 04:28:15.913851 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:28:15.914090 | orchestrator | 2026-02-16 04:28:15.914112 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-02-16 04:28:15.914128 | orchestrator | Monday 16 February 2026 04:28:11 +0000 (0:00:00.906) 0:00:23.497 ******* 2026-02-16 04:28:15.914145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 
'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-16 04:28:15.914163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 04:28:15.914201 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:28:15.914231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-16 04:28:15.914246 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 04:28:15.914259 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:28:15.914273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-16 04:28:15.914287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 
04:28:15.914301 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:28:15.914335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-16 04:28:15.914350 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:28:15.914364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-16 04:28:15.914386 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:28:15.914405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-16 04:28:15.914419 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:28:15.914432 | orchestrator | 2026-02-16 04:28:15.914449 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-02-16 04:28:15.914465 | orchestrator | Monday 16 February 2026 04:28:12 +0000 (0:00:00.724) 0:00:24.221 ******* 2026-02-16 04:28:15.914480 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 04:28:15.914494 | orchestrator | 2026-02-16 04:28:15.914506 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-02-16 04:28:15.914520 | orchestrator | Monday 16 February 2026 04:28:12 +0000 (0:00:00.660) 0:00:24.882 ******* 2026-02-16 04:28:15.914532 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:28:15.914546 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:28:15.914559 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:28:15.914572 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:28:15.914585 | orchestrator | ok: [testbed-node-4] 2026-02-16 04:28:15.914597 | orchestrator | ok: [testbed-node-5] 2026-02-16 04:28:15.914608 | orchestrator | 2026-02-16 04:28:15.914621 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-02-16 04:28:15.914635 | orchestrator | Monday 16 February 2026 04:28:13 +0000 (0:00:00.675) 
0:00:25.557 ******* 2026-02-16 04:28:15.914647 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:28:15.914659 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:28:15.914672 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:28:15.914684 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:28:15.914697 | orchestrator | ok: [testbed-node-4] 2026-02-16 04:28:15.914709 | orchestrator | ok: [testbed-node-5] 2026-02-16 04:28:15.914722 | orchestrator | 2026-02-16 04:28:15.914736 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-02-16 04:28:15.914749 | orchestrator | Monday 16 February 2026 04:28:14 +0000 (0:00:00.903) 0:00:26.461 ******* 2026-02-16 04:28:15.914761 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:28:15.914774 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:28:15.914786 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:28:15.914797 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:28:15.914810 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:28:15.914823 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:28:15.914836 | orchestrator | 2026-02-16 04:28:15.914849 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-02-16 04:28:15.914862 | orchestrator | Monday 16 February 2026 04:28:15 +0000 (0:00:00.818) 0:00:27.279 ******* 2026-02-16 04:28:15.914877 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:28:15.914891 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:28:15.914904 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:28:15.914991 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:28:15.915008 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:28:15.915021 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:28:15.915032 | orchestrator | 2026-02-16 04:28:20.544852 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-02-16 04:28:20.544943 | orchestrator | Monday 16 February 2026 04:28:15 +0000 (0:00:00.589) 0:00:27.869 ******* 2026-02-16 04:28:20.544953 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 04:28:20.544961 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-16 04:28:20.544968 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-16 04:28:20.544974 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-16 04:28:20.544980 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-16 04:28:20.544986 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-16 04:28:20.544993 | orchestrator | 2026-02-16 04:28:20.545001 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-02-16 04:28:20.545008 | orchestrator | Monday 16 February 2026 04:28:17 +0000 (0:00:01.505) 0:00:29.374 ******* 2026-02-16 04:28:20.545018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-16 04:28:20.545028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 04:28:20.545035 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:28:20.545051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-16 04:28:20.545056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 04:28:20.545060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:20.545092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:20.545097 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:28:20.545112 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:28:20.545117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:20.545122 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:28:20.545126 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:20.545130 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:28:20.545136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:20.545140 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:28:20.545144 | orchestrator |
2026-02-16 04:28:20.545148 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] *************************
2026-02-16 04:28:20.545152 | orchestrator | Monday 16 February 2026 04:28:18 +0000 (0:00:00.957) 0:00:30.332 *******
2026-02-16 04:28:20.545156 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:28:20.545159 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:28:20.545167 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:28:20.545171 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:28:20.545175 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:28:20.545178 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:28:20.545182 | orchestrator |
2026-02-16 04:28:20.545186 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] *****************
2026-02-16 04:28:20.545190 | orchestrator | Monday 16 February 2026 04:28:19 +0000 (0:00:00.638) 0:00:30.970 *******
2026-02-16 04:28:20.545193 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-16 04:28:20.545197 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-16 04:28:20.545201 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-16 04:28:20.545205 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-16 04:28:20.545208 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-16 04:28:20.545212 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-16 04:28:20.545216 | orchestrator |
2026-02-16 04:28:20.545220 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************
2026-02-16 04:28:20.545223 | orchestrator | Monday 16 February 2026 04:28:20 +0000 (0:00:01.153) 0:00:32.123 *******
2026-02-16 04:28:20.545232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:25.462896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:25.463066 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:28:25.463087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:25.463118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:25.463131 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:28:25.463143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:25.463218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:25.463232 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:28:25.463245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:25.463258 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:28:25.463299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:25.463312 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:28:25.463324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:25.463335 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:28:25.463346 | orchestrator |
2026-02-16 04:28:25.463358 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] ***************
2026-02-16 04:28:25.463370 | orchestrator | Monday 16 February 2026 04:28:21 +0000 (0:00:00.856) 0:00:32.980 *******
2026-02-16 04:28:25.463381 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:28:25.463392 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:28:25.463413 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:28:25.463425 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:28:25.463445 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:28:25.463458 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:28:25.463470 | orchestrator |
2026-02-16 04:28:25.463483 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] *********************
2026-02-16 04:28:25.463495 | orchestrator | Monday 16 February 2026 04:28:21 +0000 (0:00:00.118) 0:00:33.588 *******
2026-02-16 04:28:25.463507 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:28:25.463519 | orchestrator |
2026-02-16 04:28:25.463532 | orchestrator | TASK [ceilometer : Set ceilometer policy file] *********************************
2026-02-16 04:28:25.463544 | orchestrator | Monday 16 February 2026 04:28:21 +0000 (0:00:00.118) 0:00:33.706 *******
2026-02-16 04:28:25.463557 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:28:25.463569 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:28:25.463581 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:28:25.463593 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:28:25.463606 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:28:25.463618 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:28:25.463630 | orchestrator |
2026-02-16 04:28:25.463642 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-02-16 04:28:25.463654 | orchestrator | Monday 16 February 2026 04:28:22 +0000 (0:00:00.510) 0:00:34.217 *******
2026-02-16 04:28:25.463668 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 04:28:25.463681 | orchestrator |
2026-02-16 04:28:25.463693 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] *****
2026-02-16 04:28:25.463706 | orchestrator | Monday 16 February 2026 04:28:23 +0000 (0:00:01.085) 0:00:35.302 *******
2026-02-16 04:28:25.463718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:25.463740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:25.939445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:25.939543 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:25.939595 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:25.939609 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:25.939621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:25.939632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:25.939662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:25.939674 | orchestrator |
2026-02-16 04:28:25.939693 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] ***
2026-02-16 04:28:25.939706 | orchestrator | Monday 16 February 2026 04:28:25 +0000 (0:00:02.112) 0:00:37.415 *******
2026-02-16 04:28:25.939718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:25.939740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:25.939753 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:28:25.939765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:25.939776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:25.939787 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:28:25.939798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:25.939816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:27.771604 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:28:27.771708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:27.771726 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:28:27.771756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:27.771769 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:28:27.771780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:27.771792 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:28:27.771803 | orchestrator |
2026-02-16 04:28:27.771815 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] ***
2026-02-16 04:28:27.771827 | orchestrator | Monday 16 February 2026 04:28:26 +0000 (0:00:00.794) 0:00:38.209 *******
2026-02-16 04:28:27.771839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:27.771852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:27.771902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:27.771915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:27.771983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:27.771997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:27.772008 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:28:27.772019 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:28:27.772030 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:28:27.772041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:27.772053 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:28:27.772064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:27.772082 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:28:27.772103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:34.514329 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:28:34.514442 | orchestrator |
2026-02-16 04:28:34.514460 | orchestrator | TASK [ceilometer : Copying over config.json files for services] ****************
2026-02-16 04:28:34.514474 | orchestrator | Monday 16 February 2026 04:28:27 +0000 (0:00:01.515) 0:00:39.725 *******
2026-02-16 04:28:34.514504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:34.514520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:34.514532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:34.514545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:34.514580 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:34.514610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:34.514629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:34.514642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:34.514653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:34.514665 | orchestrator |
2026-02-16 04:28:34.514676 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] *******************************
2026-02-16 04:28:34.514688 | orchestrator | Monday 16 February 2026 04:28:30 +0000 (0:00:02.399) 0:00:42.124
******* 2026-02-16 04:28:34.514699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:34.514719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:34.514738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:42.610424 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:42.610524 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:42.610545 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:42.610610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-16 04:28:42.610619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-16 04:28:42.610653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-16 04:28:42.610665 | orchestrator | 2026-02-16 04:28:42.610677 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-02-16 04:28:42.610704 | orchestrator | Monday 16 February 2026 04:28:34 +0000 (0:00:04.343) 0:00:46.467 ******* 2026-02-16 04:28:42.610715 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 04:28:42.610726 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-16 04:28:42.610735 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-16 04:28:42.610743 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-16 04:28:42.610753 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-16 04:28:42.610762 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-16 04:28:42.610772 | orchestrator | 2026-02-16 04:28:42.610782 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-02-16 04:28:42.610792 | orchestrator | Monday 16 February 2026 04:28:35 +0000 (0:00:01.268) 0:00:47.736 ******* 2026-02-16 04:28:42.610803 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:28:42.610813 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:28:42.610822 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:28:42.610832 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:28:42.610850 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:28:42.610860 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:28:42.610871 | orchestrator | 2026-02-16 04:28:42.610881 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-02-16 
04:28:42.610893 | orchestrator | Monday 16 February 2026 04:28:36 +0000 (0:00:00.514) 0:00:48.251 ******* 2026-02-16 04:28:42.610900 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:28:42.610906 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:28:42.610912 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:28:42.610919 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:28:42.610925 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:28:42.610931 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:28:42.610937 | orchestrator | 2026-02-16 04:28:42.610971 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-02-16 04:28:42.610979 | orchestrator | Monday 16 February 2026 04:28:37 +0000 (0:00:01.400) 0:00:49.652 ******* 2026-02-16 04:28:42.610986 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:28:42.610993 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:28:42.611000 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:28:42.611007 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:28:42.611014 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:28:42.611021 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:28:42.611028 | orchestrator | 2026-02-16 04:28:42.611035 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-02-16 04:28:42.611042 | orchestrator | Monday 16 February 2026 04:28:39 +0000 (0:00:01.414) 0:00:51.067 ******* 2026-02-16 04:28:42.611049 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 04:28:42.611056 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-16 04:28:42.611062 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-16 04:28:42.611069 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-16 04:28:42.611076 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-16 04:28:42.611083 | orchestrator | ok: [testbed-node-5 -> localhost] 
2026-02-16 04:28:42.611089 | orchestrator | 2026-02-16 04:28:42.611097 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-02-16 04:28:42.611104 | orchestrator | Monday 16 February 2026 04:28:40 +0000 (0:00:01.264) 0:00:52.331 ******* 2026-02-16 04:28:42.611112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:42.611121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:42.611129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:42.611148 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:43.442998 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:43.443093 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 
'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-16 04:28:43.443107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-16 04:28:43.443119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-16 04:28:43.443129 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-16 04:28:43.443139 | orchestrator | 2026-02-16 04:28:43.443150 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-02-16 04:28:43.443160 | orchestrator | Monday 16 February 2026 04:28:42 +0000 (0:00:02.229) 0:00:54.560 ******* 2026-02-16 04:28:43.443185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-16 04:28:43.443231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 04:28:43.443242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-16 04:28:43.443252 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:28:43.443262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 04:28:43.443271 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:28:43.443280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-16 04:28:43.443289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 04:28:43.443298 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:28:43.443308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-16 04:28:43.443330 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:28:43.443351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-16 04:28:46.842532 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:28:46.842639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-16 04:28:46.842659 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:28:46.842671 | orchestrator | 2026-02-16 04:28:46.842683 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-02-16 04:28:46.842695 | orchestrator | Monday 16 February 2026 04:28:43 +0000 (0:00:00.838) 0:00:55.399 ******* 2026-02-16 04:28:46.842706 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:28:46.842717 | orchestrator | skipping: 
[testbed-node-1] 2026-02-16 04:28:46.842727 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:28:46.842738 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:28:46.842749 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:28:46.842759 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:28:46.842770 | orchestrator | 2026-02-16 04:28:46.842781 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-02-16 04:28:46.842792 | orchestrator | Monday 16 February 2026 04:28:44 +0000 (0:00:00.766) 0:00:56.166 ******* 2026-02-16 04:28:46.842805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-16 04:28:46.842818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 04:28:46.842853 | orchestrator | skipping: [testbed-node-0] 2026-02-16 
04:28:46.842865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:46.842891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:46.842922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:28:46.842935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:28:46.842995 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:28:46.843008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:46.843019 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:28:46.843038 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:28:46.843049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:46.843061 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:28:46.843074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:28:46.843093 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:28:46.843106 | orchestrator |
2026-02-16 04:28:46.843118 | orchestrator | TASK [ceilometer : Check ceilometer containers] ********************************
2026-02-16 04:28:46.843130 | orchestrator | Monday 16 February 2026 04:28:45 +0000 (0:00:00.940) 0:00:57.106 *******
2026-02-16 04:28:46.843151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:29:16.974589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:29:16.974688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-16 04:29:16.974703 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:29:16.974738 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:29:16.974763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-16 04:29:16.974775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:29:16.974802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:29:16.974813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-16 04:29:16.974824 | orchestrator |
2026-02-16 04:29:16.974843 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-02-16 04:29:16.974855 | orchestrator | Monday 16 February 2026 04:28:46 +0000 (0:00:00.588) 0:00:58.796 *******
2026-02-16 04:29:16.974865 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:29:16.974876 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:29:16.974885 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:29:16.974895 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:29:16.974904 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:29:16.974914 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:29:16.974923 | orchestrator |
2026-02-16 04:29:16.974934 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] *********************
2026-02-16 04:29:16.974943 | orchestrator | Monday 16 February 2026 04:28:47 +0000 (0:00:00.588) 0:00:59.385 *******
2026-02-16 04:29:16.974953 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:29:16.974962 | orchestrator |
2026-02-16 04:29:16.975051 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-16 04:29:16.975064 | orchestrator | Monday 16 February 2026 04:28:51 +0000 (0:00:04.331) 0:01:03.717 *******
2026-02-16 04:29:16.975074 | orchestrator |
2026-02-16 04:29:16.975083 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-16 04:29:16.975093 | orchestrator | Monday 16 February 2026 04:28:51 +0000 (0:00:00.072) 0:01:03.790 *******
2026-02-16 04:29:16.975102 | orchestrator |
2026-02-16 04:29:16.975112 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-16 04:29:16.975121 | orchestrator | Monday 16 February 2026 04:28:51 +0000 (0:00:00.071) 0:01:03.861 *******
2026-02-16 04:29:16.975133 | orchestrator |
2026-02-16 04:29:16.975144 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-16 04:29:16.975154 | orchestrator | Monday 16 February 2026 04:28:52 +0000 (0:00:00.242) 0:01:04.104 *******
2026-02-16 04:29:16.975166 | orchestrator |
2026-02-16 04:29:16.975177 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-16 04:29:16.975188 | orchestrator | Monday 16 February 2026 04:28:52 +0000 (0:00:00.072) 0:01:04.177 *******
2026-02-16 04:29:16.975198 | orchestrator |
2026-02-16 04:29:16.975209 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-16 04:29:16.975220 | orchestrator | Monday 16 February 2026 04:28:52 +0000 (0:00:00.068) 0:01:04.245 *******
2026-02-16 04:29:16.975231 | orchestrator |
2026-02-16 04:29:16.975242 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] *******
2026-02-16 04:29:16.975253 | orchestrator | Monday 16 February 2026 04:28:52 +0000 (0:00:00.073) 0:01:04.318 *******
2026-02-16 04:29:16.975264 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:29:16.975275 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:29:16.975287 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:29:16.975297 | orchestrator |
2026-02-16 04:29:16.975307 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************
2026-02-16 04:29:16.975317 | orchestrator | Monday 16 February 2026 04:29:02 +0000 (0:00:10.518) 0:01:14.837 *******
2026-02-16 04:29:16.975326 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:29:16.975342 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:29:16.975351 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:29:16.975361 | orchestrator |
2026-02-16 04:29:16.975371 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************
2026-02-16 04:29:16.975380 | orchestrator | Monday 16 February 2026 04:29:10 +0000 (0:00:07.605) 0:01:22.442 *******
2026-02-16 04:29:16.975390 | orchestrator | changed: [testbed-node-3]
2026-02-16 04:29:16.975399 | orchestrator | changed: [testbed-node-5]
2026-02-16 04:29:16.975409 | orchestrator | changed: [testbed-node-4]
2026-02-16 04:29:16.975418 | orchestrator |
2026-02-16 04:29:16.975428 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 04:29:16.975439 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-16 04:29:16.975458 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-16 04:29:16.975475 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-16 04:29:17.429009 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-16 04:29:17.429102 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-16 04:29:17.429114 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-16 04:29:17.429122 | orchestrator |
2026-02-16 04:29:17.429131 | orchestrator |
2026-02-16 04:29:17.429138 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 04:29:17.429147 | orchestrator | Monday 16 February 2026 04:29:16 +0000 (0:00:06.481) 0:01:28.923 *******
2026-02-16 04:29:17.429154 | orchestrator | ===============================================================================
2026-02-16 04:29:17.429171 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 10.52s
2026-02-16 04:29:17.429178 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 7.61s
2026-02-16 04:29:17.429185 | orchestrator | ceilometer : Restart ceilometer-compute container ----------------------- 6.48s
2026-02-16 04:29:17.429192 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 4.34s
2026-02-16 04:29:17.429206 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.33s
2026-02-16 04:29:17.429214 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 4.09s
2026-02-16 04:29:17.429221 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.80s
2026-02-16 04:29:17.429228 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.60s
2026-02-16 04:29:17.429236 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.14s
2026-02-16 04:29:17.429242 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.40s
2026-02-16 04:29:17.429248 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.23s
2026-02-16 04:29:17.429255 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.11s
2026-02-16 04:29:17.429262 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.69s
2026-02-16 04:29:17.429269 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.66s
2026-02-16 04:29:17.429276 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.52s
2026-02-16 04:29:17.429284 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.51s
2026-02-16 04:29:17.429290 | orchestrator | ceilometer : Copying over event_pipeline.yaml --------------------------- 1.41s
2026-02-16 04:29:17.429296 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.40s
2026-02-16 04:29:17.429304 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 1.38s
2026-02-16 04:29:17.429310 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.27s
2026-02-16 04:29:19.757364 | orchestrator | 2026-02-16 04:29:19 | INFO  | Task c34f889f-75d4-4c53-b7b3-d86cff6e457c (aodh) was prepared for execution.
2026-02-16 04:29:19.757433 | orchestrator | 2026-02-16 04:29:19 | INFO  | It takes a moment until task c34f889f-75d4-4c53-b7b3-d86cff6e457c (aodh) has been started and output is visible here.
2026-02-16 04:29:51.859892 | orchestrator |
2026-02-16 04:29:51.860065 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-16 04:29:51.860083 | orchestrator |
2026-02-16 04:29:51.860118 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-16 04:29:51.860129 | orchestrator | Monday 16 February 2026 04:29:23 +0000 (0:00:00.265) 0:00:00.265 *******
2026-02-16 04:29:51.860139 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:29:51.860149 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:29:51.860159 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:29:51.860168 | orchestrator |
2026-02-16 04:29:51.860178 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-16 04:29:51.860188 | orchestrator | Monday 16 February 2026 04:29:24 +0000 (0:00:00.299) 0:00:00.564 *******
2026-02-16 04:29:51.860219 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True)
2026-02-16 04:29:51.860242 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True)
2026-02-16 04:29:51.860266 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True)
2026-02-16 04:29:51.860281 | orchestrator |
2026-02-16 04:29:51.860297 | orchestrator | PLAY [Apply role aodh] *********************************************************
2026-02-16 04:29:51.860312 | orchestrator |
2026-02-16 04:29:51.860328 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-02-16 04:29:51.860343 | orchestrator | Monday 16 February 2026 04:29:24 +0000 (0:00:00.459) 0:00:01.024 *******
2026-02-16 04:29:51.860357 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:29:51.860372 | orchestrator |
2026-02-16 04:29:51.860385 | orchestrator | TASK [service-ks-register : aodh | Creating services] **************************
2026-02-16 04:29:51.860399 | orchestrator | Monday 16 February 2026 04:29:25 +0000 (0:00:00.578) 0:00:01.603 *******
2026-02-16 04:29:51.860413 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming))
2026-02-16 04:29:51.860428 | orchestrator |
2026-02-16 04:29:51.860444 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] *************************
2026-02-16 04:29:51.860460 | orchestrator | Monday 16 February 2026 04:29:28 +0000 (0:00:03.550) 0:00:05.153 *******
2026-02-16 04:29:51.860476 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal)
2026-02-16 04:29:51.860493 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public)
2026-02-16 04:29:51.860510 | orchestrator |
2026-02-16 04:29:51.860526 | orchestrator | TASK [service-ks-register : aodh | Creating projects] **************************
2026-02-16 04:29:51.860541 | orchestrator | Monday 16 February 2026 04:29:35 +0000 (0:00:06.557) 0:00:11.711 *******
2026-02-16 04:29:51.860555 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-16 04:29:51.860571 | orchestrator |
2026-02-16 04:29:51.860585 | orchestrator | TASK [service-ks-register : aodh | Creating users] *****************************
2026-02-16 04:29:51.860601 | orchestrator | Monday 16 February 2026 04:29:38 +0000 (0:00:03.440) 0:00:15.151 *******
2026-02-16 04:29:51.860618 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-16 04:29:51.860633 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service)
2026-02-16 04:29:51.860648 | orchestrator |
2026-02-16 04:29:51.860663 | orchestrator | TASK [service-ks-register : aodh | Creating roles] *****************************
2026-02-16 04:29:51.860679 | orchestrator | Monday 16 February 2026 04:29:42 +0000 (0:00:03.795) 0:00:18.947 *******
2026-02-16 04:29:51.860693 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-16 04:29:51.860710 | orchestrator |
2026-02-16 04:29:51.860726 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************
2026-02-16 04:29:51.860743 | orchestrator | Monday 16 February 2026 04:29:45 +0000 (0:00:03.312) 0:00:22.260 *******
2026-02-16 04:29:51.860759 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin)
2026-02-16 04:29:51.860775 | orchestrator |
2026-02-16 04:29:51.860791 | orchestrator | TASK [aodh : Ensuring config directories exist] ********************************
2026-02-16 04:29:51.860807 | orchestrator | Monday 16 February 2026 04:29:49 +0000 (0:00:03.749) 0:00:26.010 *******
2026-02-16 04:29:51.860829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-16 04:29:51.860896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-16 04:29:51.860933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-16 04:29:51.860952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-16 04:29:51.860971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-16 04:29:51.860988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-16 04:29:51.861045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-16 04:29:51.861066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-16 04:29:53.092203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-16 04:29:53.092321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-16 04:29:53.092348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-16 04:29:53.092372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-16 04:29:53.092394 | orchestrator |
2026-02-16 04:29:53.092444 | orchestrator | TASK [aodh : Check if policies shall be overwritten] ***************************
2026-02-16 04:29:53.092478 | orchestrator | Monday 16 February 2026 04:29:51 +0000 (0:00:02.188) 0:00:28.198 *******
2026-02-16 04:29:53.092490 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:29:53.092502 | orchestrator |
2026-02-16 04:29:53.092514 | orchestrator | TASK [aodh : Set aodh policy file] *********************************************
2026-02-16 04:29:53.092525 | orchestrator | Monday 16 February 2026 04:29:51 +0000 (0:00:00.125) 0:00:28.323 *******
2026-02-16 04:29:53.092536 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:29:53.092547 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:29:53.092558 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:29:53.092571 | orchestrator |
2026-02-16 04:29:53.092591 | orchestrator | TASK [aodh : Copying over existing policy file] ********************************
2026-02-16 04:29:53.092609 | orchestrator | Monday 16 February 2026 04:29:52 +0000 (0:00:00.501) 0:00:28.825 *******
2026-02-16 04:29:53.092628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-16 04:29:53.092675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-16 04:29:53.092708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-16 04:29:53.092731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-16 04:29:53.092751 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:29:53.092771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-16 04:29:53.092807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-16 04:29:53.092827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-16 04:29:53.092857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-16 04:29:58.074586 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:29:58.074744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-16 04:29:58.074776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator
3306'], 'timeout': '30'}}})  2026-02-16 04:29:58.074797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-16 04:29:58.074844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-16 04:29:58.074864 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:29:58.074882 | orchestrator | 2026-02-16 04:29:58.074901 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-16 04:29:58.074919 | orchestrator | Monday 16 February 2026 04:29:53 +0000 (0:00:00.615) 0:00:29.440 ******* 2026-02-16 04:29:58.074936 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 04:29:58.074954 | orchestrator | 2026-02-16 04:29:58.074972 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-02-16 04:29:58.074988 | orchestrator | Monday 
16 February 2026 04:29:53 +0000 (0:00:00.718) 0:00:30.159 ******* 2026-02-16 04:29:58.075067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-16 04:29:58.075110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-16 04:29:58.075123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-16 04:29:58.075145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-16 04:29:58.075158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}}) 2026-02-16 04:29:58.075172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-16 04:29:58.075190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:29:58.075233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:29:58.743211 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:29:58.743337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-16 04:29:58.743353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-16 04:29:58.743366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-16 04:29:58.743378 | orchestrator | 2026-02-16 04:29:58.743393 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-02-16 04:29:58.743406 | orchestrator | Monday 16 February 2026 04:29:58 +0000 (0:00:04.258) 0:00:34.418 ******* 2026-02-16 04:29:58.743419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-16 04:29:58.743447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-16 04:29:58.743479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-16 04:29:58.743500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-16 04:29:58.743513 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:29:58.743526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-16 04:29:58.743538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-16 04:29:58.743550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-16 04:29:58.743563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-16 04:29:58.743574 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:29:58.743600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-16 04:29:59.723465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-02-16 04:29:59.723579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-16 04:29:59.723592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-16 04:29:59.723603 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:29:59.723615 | orchestrator | 2026-02-16 04:29:59.723625 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-02-16 04:29:59.723635 | orchestrator | Monday 16 February 2026 04:29:58 +0000 (0:00:00.673) 0:00:35.091 ******* 2026-02-16 04:29:59.723645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-16 04:29:59.723670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-16 04:29:59.723698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-16 04:29:59.723723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-16 04:29:59.723732 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:29:59.723742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-16 04:29:59.723751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-16 04:29:59.723760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-16 04:29:59.723769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-16 04:29:59.723796 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:29:59.723818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-16 04:30:03.814704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-16 04:30:03.814783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-16 04:30:03.814790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-16 04:30:03.814796 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:30:03.814801 | orchestrator | 2026-02-16 04:30:03.814807 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-02-16 04:30:03.814813 | orchestrator | Monday 16 February 2026 04:29:59 +0000 (0:00:00.981) 0:00:36.073 ******* 2026-02-16 04:30:03.814819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-16 04:30:03.814851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-16 04:30:03.814870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-16 04:30:03.814878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-16 04:30:03.814886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-16 04:30:03.814893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-16 04:30:03.814901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:30:03.814918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:30:03.814925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:30:03.814938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-16 04:30:12.215896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-16 04:30:12.215991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-16 04:30:12.216003 | orchestrator | 2026-02-16 04:30:12.216056 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-02-16 04:30:12.216072 | orchestrator | Monday 16 February 2026 04:30:03 +0000 (0:00:04.083) 0:00:40.157 ******* 2026-02-16 04:30:12.216087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-16 04:30:12.216132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-16 04:30:12.216142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-16 04:30:12.216164 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-16 04:30:12.216173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-16 04:30:12.216180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-16 04:30:12.216196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:30:12.216208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:30:12.216216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:30:12.216224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-16 04:30:12.216237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-16 04:30:17.316516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-16 04:30:17.316617 | orchestrator | 2026-02-16 04:30:17.316628 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-02-16 04:30:17.316637 | orchestrator | Monday 16 February 2026 04:30:12 +0000 (0:00:08.399) 0:00:48.556 ******* 2026-02-16 04:30:17.316644 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:30:17.316652 | orchestrator | 
changed: [testbed-node-1] 2026-02-16 04:30:17.316678 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:30:17.316684 | orchestrator | 2026-02-16 04:30:17.316704 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-02-16 04:30:17.316711 | orchestrator | Monday 16 February 2026 04:30:13 +0000 (0:00:01.794) 0:00:50.350 ******* 2026-02-16 04:30:17.316719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-16 04:30:17.316740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-16 04:30:17.316747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-16 04:30:17.316768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-16 04:30:17.316785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-16 04:30:17.316797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-16 04:30:17.316803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:30:17.316814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:30:17.316821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-16 04:30:17.316829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-16 04:30:17.316843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-16 04:31:01.675515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-16 04:31:01.675695 | orchestrator | 2026-02-16 04:31:01.675718 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-16 04:31:01.675733 | orchestrator | Monday 16 February 2026 04:30:17 +0000 (0:00:03.312) 0:00:53.663 ******* 2026-02-16 04:31:01.675777 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:31:01.675793 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:31:01.675806 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:31:01.675820 | orchestrator | 2026-02-16 04:31:01.675834 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-02-16 04:31:01.675848 | orchestrator | Monday 16 February 2026 04:30:17 +0000 (0:00:00.337) 0:00:54.001 ******* 2026-02-16 04:31:01.675862 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:31:01.675875 | orchestrator | 2026-02-16 04:31:01.675889 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-02-16 04:31:01.675897 | orchestrator | Monday 16 February 2026 04:30:19 +0000 (0:00:02.071) 0:00:56.072 ******* 2026-02-16 04:31:01.675905 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:31:01.675913 | orchestrator | 2026-02-16 
04:31:01.675921 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-02-16 04:31:01.675929 | orchestrator | Monday 16 February 2026 04:30:21 +0000 (0:00:02.233) 0:00:58.306 ******* 2026-02-16 04:31:01.675937 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:31:01.675945 | orchestrator | 2026-02-16 04:31:01.675953 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-16 04:31:01.675961 | orchestrator | Monday 16 February 2026 04:30:35 +0000 (0:00:13.184) 0:01:11.491 ******* 2026-02-16 04:31:01.675969 | orchestrator | 2026-02-16 04:31:01.675977 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-16 04:31:01.675984 | orchestrator | Monday 16 February 2026 04:30:35 +0000 (0:00:00.071) 0:01:11.562 ******* 2026-02-16 04:31:01.675992 | orchestrator | 2026-02-16 04:31:01.676000 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-16 04:31:01.676008 | orchestrator | Monday 16 February 2026 04:30:35 +0000 (0:00:00.069) 0:01:11.632 ******* 2026-02-16 04:31:01.676016 | orchestrator | 2026-02-16 04:31:01.676023 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-02-16 04:31:01.676031 | orchestrator | Monday 16 February 2026 04:30:35 +0000 (0:00:00.266) 0:01:11.898 ******* 2026-02-16 04:31:01.676110 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:31:01.676123 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:31:01.676132 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:31:01.676141 | orchestrator | 2026-02-16 04:31:01.676150 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-02-16 04:31:01.676159 | orchestrator | Monday 16 February 2026 04:30:46 +0000 (0:00:10.573) 0:01:22.472 ******* 2026-02-16 04:31:01.676168 | orchestrator | changed: 
[testbed-node-0] 2026-02-16 04:31:01.676177 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:31:01.676186 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:31:01.676195 | orchestrator | 2026-02-16 04:31:01.676204 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-02-16 04:31:01.676213 | orchestrator | Monday 16 February 2026 04:30:51 +0000 (0:00:04.981) 0:01:27.453 ******* 2026-02-16 04:31:01.676222 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:31:01.676231 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:31:01.676240 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:31:01.676248 | orchestrator | 2026-02-16 04:31:01.676258 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-02-16 04:31:01.676267 | orchestrator | Monday 16 February 2026 04:30:56 +0000 (0:00:04.941) 0:01:32.394 ******* 2026-02-16 04:31:01.676294 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:31:01.676307 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:31:01.676320 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:31:01.676335 | orchestrator | 2026-02-16 04:31:01.676350 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 04:31:01.676365 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-16 04:31:01.676380 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-16 04:31:01.676388 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-16 04:31:01.676396 | orchestrator | 2026-02-16 04:31:01.676404 | orchestrator | 2026-02-16 04:31:01.676412 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 04:31:01.676420 | orchestrator | Monday 16 February 
2026 04:31:01 +0000 (0:00:05.284) 0:01:37.679 *******
2026-02-16 04:31:01.676428 | orchestrator | ===============================================================================
2026-02-16 04:31:01.676436 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.18s
2026-02-16 04:31:01.676444 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 10.57s
2026-02-16 04:31:01.676469 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.40s
2026-02-16 04:31:01.676477 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.56s
2026-02-16 04:31:01.676485 | orchestrator | aodh : Restart aodh-notifier container ---------------------------------- 5.28s
2026-02-16 04:31:01.676493 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 4.98s
2026-02-16 04:31:01.676501 | orchestrator | aodh : Restart aodh-listener container ---------------------------------- 4.94s
2026-02-16 04:31:01.676509 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.26s
2026-02-16 04:31:01.676516 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.08s
2026-02-16 04:31:01.676524 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.80s
2026-02-16 04:31:01.676532 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.75s
2026-02-16 04:31:01.676540 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.55s
2026-02-16 04:31:01.676548 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.44s
2026-02-16 04:31:01.676556 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.31s
2026-02-16 04:31:01.676563 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.31s
2026-02-16 04:31:01.676571 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.23s
2026-02-16 04:31:01.676579 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.19s
2026-02-16 04:31:01.676587 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.07s
2026-02-16 04:31:01.676595 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.79s
2026-02-16 04:31:01.676603 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 0.98s
2026-02-16 04:31:03.984039 | orchestrator | 2026-02-16 04:31:03 | INFO  | Task f0b504ba-3b99-4e2b-b1f7-d0f10db8705c (kolla-ceph-rgw) was prepared for execution.
2026-02-16 04:31:03.984195 | orchestrator | 2026-02-16 04:31:03 | INFO  | It takes a moment until task f0b504ba-3b99-4e2b-b1f7-d0f10db8705c (kolla-ceph-rgw) has been started and output is visible here.
2026-02-16 04:31:39.002276 | orchestrator |
2026-02-16 04:31:39.002353 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-16 04:31:39.002360 | orchestrator |
2026-02-16 04:31:39.002365 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-16 04:31:39.002384 | orchestrator | Monday 16 February 2026 04:31:08 +0000 (0:00:00.287) 0:00:00.287 *******
2026-02-16 04:31:39.002389 | orchestrator | ok: [testbed-manager]
2026-02-16 04:31:39.002394 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:31:39.002398 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:31:39.002401 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:31:39.002405 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:31:39.002419 | orchestrator | ok: [testbed-node-4]
2026-02-16 04:31:39.002422 | orchestrator | ok: [testbed-node-5]
2026-02-16 04:31:39.002426 | orchestrator |
2026-02-16 04:31:39.002430 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-16 04:31:39.002434 | orchestrator | Monday 16 February 2026 04:31:08 +0000 (0:00:00.832) 0:00:01.119 *******
2026-02-16 04:31:39.002438 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-02-16 04:31:39.002442 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-02-16 04:31:39.002446 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-02-16 04:31:39.002450 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-02-16 04:31:39.002453 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-02-16 04:31:39.002457 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-02-16 04:31:39.002461 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-02-16 04:31:39.002465 | orchestrator |
2026-02-16 04:31:39.002468 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-02-16 04:31:39.002472 | orchestrator |
2026-02-16 04:31:39.002476 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-02-16 04:31:39.002480 | orchestrator | Monday 16 February 2026 04:31:09 +0000 (0:00:00.712) 0:00:01.831 *******
2026-02-16 04:31:39.002484 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 04:31:39.002489 | orchestrator |
2026-02-16 04:31:39.002493 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-02-16 04:31:39.002497 | orchestrator | Monday 16 February 2026 04:31:11 +0000 (0:00:01.560) 0:00:03.392 *******
2026-02-16 04:31:39.002501 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-02-16 04:31:39.002505 | orchestrator |
2026-02-16 04:31:39.002509 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-02-16 04:31:39.002513 | orchestrator | Monday 16 February 2026 04:31:14 +0000 (0:00:03.729) 0:00:07.121 *******
2026-02-16 04:31:39.002517 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-02-16 04:31:39.002522 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-02-16 04:31:39.002526 | orchestrator |
2026-02-16 04:31:39.002530 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-02-16 04:31:39.002534 | orchestrator | Monday 16 February 2026 04:31:21 +0000 (0:00:06.068) 0:00:13.190 *******
2026-02-16 04:31:39.002538 | orchestrator | ok: [testbed-manager] => (item=service)
2026-02-16 04:31:39.002541 | orchestrator |
2026-02-16 04:31:39.002545 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-02-16 04:31:39.002549 | orchestrator | Monday 16 February 2026 04:31:24 +0000 (0:00:03.124) 0:00:16.314 *******
2026-02-16 04:31:39.002553 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-16 04:31:39.002557 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-02-16 04:31:39.002560 | orchestrator |
2026-02-16 04:31:39.002564 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-02-16 04:31:39.002568 | orchestrator | Monday 16 February 2026 04:31:27 +0000 (0:00:03.735) 0:00:20.050 *******
2026-02-16 04:31:39.002572 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-02-16 04:31:39.002579 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-02-16 04:31:39.002583 | orchestrator |
2026-02-16 04:31:39.002587 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-02-16 04:31:39.002591 | orchestrator | Monday 16 February 2026 04:31:33 +0000 (0:00:05.898) 0:00:25.948 *******
2026-02-16 04:31:39.002595 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-02-16 04:31:39.002598 | orchestrator |
2026-02-16 04:31:39.002602 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 04:31:39.002606 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 04:31:39.002610 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 04:31:39.002614 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 04:31:39.002618 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 04:31:39.002622 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 04:31:39.002635 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 04:31:39.002639 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 04:31:39.002642 | orchestrator |
2026-02-16 04:31:39.002646 | orchestrator |
2026-02-16 04:31:39.002650 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 04:31:39.002654 | orchestrator | Monday 16 February 2026 04:31:38 +0000 (0:00:04.722) 0:00:30.671 *******
2026-02-16 04:31:39.002658 | orchestrator | ===============================================================================
2026-02-16 04:31:39.002664 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.07s
2026-02-16 04:31:39.002668 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.90s
2026-02-16 04:31:39.002671 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.72s
2026-02-16 04:31:39.002675 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.74s
2026-02-16 04:31:39.002679 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.73s
2026-02-16 04:31:39.002683 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.12s
2026-02-16 04:31:39.002686 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.56s
2026-02-16 04:31:39.002690 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.83s
2026-02-16 04:31:39.002694 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s
2026-02-16 04:31:41.328619 | orchestrator | 2026-02-16 04:31:41 | INFO  | Task 874a9530-6dd2-4c57-8f17-17b357c7ff86 (gnocchi) was prepared for execution.
2026-02-16 04:31:41.328719 | orchestrator | 2026-02-16 04:31:41 | INFO  | It takes a moment until task 874a9530-6dd2-4c57-8f17-17b357c7ff86 (gnocchi) has been started and output is visible here.
2026-02-16 04:31:46.556709 | orchestrator |
2026-02-16 04:31:46.556821 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-16 04:31:46.556838 | orchestrator |
2026-02-16 04:31:46.556850 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-16 04:31:46.556860 | orchestrator | Monday 16 February 2026 04:31:45 +0000 (0:00:00.260) 0:00:00.260 *******
2026-02-16 04:31:46.556872 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:31:46.556883 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:31:46.556920 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:31:46.556931 | orchestrator |
2026-02-16 04:31:46.556941 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-16 04:31:46.556952 | orchestrator | Monday 16 February 2026 04:31:45 +0000 (0:00:00.311) 0:00:00.572 *******
2026-02-16 04:31:46.556966 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-02-16 04:31:46.556977 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-02-16 04:31:46.556989 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-02-16 04:31:46.556999 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-02-16 04:31:46.557007 | orchestrator |
2026-02-16 04:31:46.557017 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-02-16 04:31:46.557026 | orchestrator | skipping: no hosts matched
2026-02-16 04:31:46.557038 | orchestrator |
2026-02-16 04:31:46.557048 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 04:31:46.557059 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 04:31:46.557146 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 04:31:46.557156 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-16 04:31:46.557166 | orchestrator |
2026-02-16 04:31:46.557176 | orchestrator |
2026-02-16 04:31:46.557187 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 04:31:46.557198 | orchestrator | Monday 16 February 2026 04:31:46 +0000 (0:00:00.377) 0:00:00.949 *******
2026-02-16 04:31:46.557208 | orchestrator | ===============================================================================
2026-02-16 04:31:46.557218 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s
2026-02-16 04:31:46.557229 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-02-16 04:31:48.947761 | orchestrator | 2026-02-16 04:31:48 | INFO  | Task 4111b06b-8537-4863-9ed2-6460e49cebbe (manila) was prepared for execution.
2026-02-16 04:31:48.947867 | orchestrator | 2026-02-16 04:31:48 | INFO  | It takes a moment until task 4111b06b-8537-4863-9ed2-6460e49cebbe (manila) has been started and output is visible here.
2026-02-16 04:32:30.505486 | orchestrator |
2026-02-16 04:32:30.505626 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-16 04:32:30.505649 | orchestrator |
2026-02-16 04:32:30.505669 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-16 04:32:30.505687 | orchestrator | Monday 16 February 2026 04:31:52 +0000 (0:00:00.233) 0:00:00.233 *******
2026-02-16 04:32:30.505704 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:32:30.505722 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:32:30.505740 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:32:30.505756 | orchestrator |
2026-02-16 04:32:30.505774 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-16 04:32:30.505793 | orchestrator | Monday 16 February 2026 04:31:53 +0000 (0:00:00.290) 0:00:00.524 *******
2026-02-16 04:32:30.505811 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-02-16 04:32:30.505829 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-02-16 04:32:30.505846 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-02-16 04:32:30.505863 | orchestrator |
2026-02-16 04:32:30.505880 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-02-16 04:32:30.505896 | orchestrator |
2026-02-16 04:32:30.505914 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-16 04:32:30.505931 | orchestrator | Monday 16 February 2026 04:31:53 +0000 (0:00:00.380) 0:00:00.904 *******
2026-02-16 04:32:30.505967 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:32:30.506075 | orchestrator |
2026-02-16 04:32:30.506126 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-16 04:32:30.506145 | orchestrator | Monday 16 February 2026 04:31:54 +0000 (0:00:00.490) 0:00:01.395 *******
2026-02-16 04:32:30.506163 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:32:30.506181 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:32:30.506199 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:32:30.506216 | orchestrator |
2026-02-16 04:32:30.506234 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************
2026-02-16 04:32:30.506250 | orchestrator | Monday 16 February 2026 04:31:54 +0000 (0:00:00.368) 0:00:01.763 *******
2026-02-16 04:32:30.506268 | orchestrator | changed: [testbed-node-0] => (item=manila (share))
2026-02-16 04:32:30.506284 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2))
2026-02-16 04:32:30.506301 | orchestrator |
2026-02-16 04:32:30.506318 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] ***********************
2026-02-16 04:32:30.506335 | orchestrator | Monday 16 February 2026 04:32:00 +0000 (0:00:06.523) 0:00:08.286 *******
2026-02-16 04:32:30.506352 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal)
2026-02-16 04:32:30.506370 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public)
2026-02-16 04:32:30.506386 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal)
2026-02-16 04:32:30.506403 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public)
2026-02-16 04:32:30.506420 | orchestrator |
2026-02-16 04:32:30.506436 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************
2026-02-16 04:32:30.506453 | orchestrator | Monday 16 February 2026 04:32:13 +0000 (0:00:12.977) 0:00:21.264 *******
2026-02-16 04:32:30.506471 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-16 04:32:30.506490 | orchestrator |
2026-02-16 04:32:30.506508 | orchestrator | TASK [service-ks-register : manila | Creating users] ***************************
2026-02-16 04:32:30.506527 | orchestrator | Monday 16 February 2026 04:32:17 +0000 (0:00:03.178) 0:00:24.442 *******
2026-02-16 04:32:30.506545 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-16 04:32:30.506563 | orchestrator | changed: [testbed-node-0] => (item=manila -> service)
2026-02-16 04:32:30.506577 | orchestrator |
2026-02-16 04:32:30.506588 | orchestrator | TASK [service-ks-register : manila | Creating roles] ***************************
2026-02-16 04:32:30.506598 | orchestrator | Monday 16 February 2026 04:32:21 +0000 (0:00:04.037) 0:00:28.480 *******
2026-02-16 04:32:30.506609 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-16 04:32:30.506619 | orchestrator |
2026-02-16 04:32:30.506630 | orchestrator | TASK [service-ks-register : manila | Granting user roles] **********************
2026-02-16 04:32:30.506641 | orchestrator | Monday 16 February 2026 04:32:24 +0000 (0:00:03.219) 0:00:31.699 *******
2026-02-16 04:32:30.506651 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin)
2026-02-16 04:32:30.506662 | orchestrator |
2026-02-16 04:32:30.506673 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-02-16 04:32:30.506683 | orchestrator | Monday 16 February 2026 04:32:28 +0000 (0:00:03.959) 0:00:35.659 *******
2026-02-16 04:32:30.506719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-16 04:32:30.506753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-16 04:32:30.506766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-16 04:32:30.506778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 04:32:30.506795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 04:32:30.506814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 04:32:30.506845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-16 04:32:40.933909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-16 04:32:40.934187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-16 04:32:40.934225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-16 04:32:40.934246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-16 04:32:40.934267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-16 04:32:40.934288 | orchestrator |
2026-02-16 04:32:40.934311 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-16 04:32:40.934333 | orchestrator | Monday 16 February 2026 04:32:30 +0000 (0:00:02.213) 0:00:37.873 *******
2026-02-16 04:32:40.934506 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:32:40.934525 | orchestrator |
2026-02-16 04:32:40.934538 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] **************
2026-02-16 04:32:40.934551 | orchestrator | Monday 16 February 2026 04:32:31 +0000 (0:00:00.550) 0:00:38.423 *******
2026-02-16 04:32:40.934563 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:32:40.934575 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:32:40.934586 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:32:40.934597 | orchestrator |
2026-02-16 04:32:40.934608 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] *********************
2026-02-16 04:32:40.934619 | orchestrator | Monday 16 February 2026 04:32:32 +0000 (0:00:00.944) 0:00:39.368 *******
2026-02-16 04:32:40.934631 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-16 04:32:40.934665 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-16 04:32:40.934677 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-16 04:32:40.934688 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-16 04:32:40.934712 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-16 04:32:40.934731 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-16 04:32:40.934748 | orchestrator |
2026-02-16 04:32:40.934767 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-02-16 04:32:40.934787 | orchestrator | Monday 16 February 2026 04:32:33 +0000 (0:00:01.775) 0:00:41.143 *******
2026-02-16 04:32:40.934805 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-16 04:32:40.934825 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-16 04:32:40.934837 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-16 04:32:40.934848 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-16 04:32:40.934859 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-16 04:32:40.934869 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-16 04:32:40.934880 | orchestrator |
2026-02-16 04:32:40.934891 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-02-16 04:32:40.934902 | orchestrator | Monday 16 February 2026 04:32:35 +0000 (0:00:01.218) 0:00:42.361 *******
2026-02-16 04:32:40.934913 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-02-16 04:32:40.934925 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-02-16 04:32:40.934965 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-02-16 04:32:40.934977 | orchestrator |
2026-02-16 04:32:40.934987 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-02-16 04:32:40.934998 | orchestrator | Monday 16 February 2026 04:32:35 +0000 (0:00:00.145) 0:00:43.042 *******
2026-02-16 04:32:40.935009 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:32:40.935020 | orchestrator |
2026-02-16 04:32:40.935030 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-02-16 04:32:40.935041 | orchestrator | Monday 16 February 2026 04:32:35 +0000 (0:00:00.145) 0:00:43.187 *******
2026-02-16 04:32:40.935052 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:32:40.935063 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:32:40.935074 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:32:40.935084 | orchestrator |
2026-02-16 04:32:40.935169 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-16 04:32:40.935180 | orchestrator | Monday 16 February 2026 04:32:36 +0000 (0:00:00.461) 0:00:43.649 *******
2026-02-16 04:32:40.935192 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:32:40.935203 | orchestrator |
2026-02-16 04:32:40.935213 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] *********
2026-02-16 04:32:40.935224 | orchestrator | Monday 16 February 2026 04:32:36 +0000 (0:00:00.568) 0:00:44.218 *******
2026-02-16 04:32:40.935247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-16 04:32:41.854886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-16 04:32:41.854988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-16 04:32:41.855022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 04:32:41.855033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 04:32:41.855043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 04:32:41.855069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-16 04:32:41.855156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share',
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:41.855169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:41.855178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:41.855195 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:41.855204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:41.855214 | orchestrator | 2026-02-16 04:32:41.855224 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-02-16 04:32:41.855235 | orchestrator | Monday 16 February 2026 04:32:41 +0000 (0:00:04.086) 0:00:48.305 ******* 2026-02-16 04:32:41.855252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-16 04:32:42.471840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:32:42.471946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 04:32:42.471978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 04:32:42.471990 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:32:42.472001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-16 04:32:42.472012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:32:42.472021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 04:32:42.472051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 04:32:42.472061 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:32:42.472071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-16 04:32:42.472118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:32:42.472129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 04:32:42.472138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 04:32:42.472147 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:32:42.472156 | orchestrator | 2026-02-16 04:32:42.472166 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-02-16 04:32:42.472177 | orchestrator | Monday 16 February 2026 04:32:41 +0000 (0:00:00.914) 0:00:49.220 ******* 2026-02-16 04:32:42.472198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-16 04:32:46.949216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:32:46.949312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 04:32:46.949319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 04:32:46.949324 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:32:46.949330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-16 04:32:46.949336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:32:46.949350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 04:32:46.949369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 04:32:46.949373 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:32:46.949377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-16 04:32:46.949382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:32:46.949386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 04:32:46.949390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 04:32:46.949393 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:32:46.949397 | orchestrator | 2026-02-16 04:32:46.949402 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-02-16 04:32:46.949408 | orchestrator | Monday 16 
February 2026 04:32:42 +0000 (0:00:00.858) 0:00:50.078 ******* 2026-02-16 04:32:46.949419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-16 04:32:53.550627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-16 04:32:53.550724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 
'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-16 04:32:53.550738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:53.550749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}}) 2026-02-16 04:32:53.550758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:53.550812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:53.550825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-share 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:53.550833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:53.550841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:53.550850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:53.550858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:53.550874 | orchestrator | 2026-02-16 04:32:53.550884 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-02-16 04:32:53.550894 | orchestrator | Monday 16 February 2026 04:32:47 +0000 (0:00:04.435) 0:00:54.513 ******* 2026-02-16 04:32:53.550913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-16 04:32:57.634764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-16 04:32:57.634873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-16 04:32:57.634889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:57.634903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 04:32:57.634955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:57.634987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 04:32:57.635000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:57.635011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 04:32:57.635023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:57.635034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:57.635052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-16 04:32:57.635064 | orchestrator | 2026-02-16 04:32:57.635078 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-02-16 04:32:57.635090 | orchestrator | Monday 16 February 2026 04:32:53 +0000 (0:00:06.414) 0:01:00.928 ******* 
2026-02-16 04:32:57.635375 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-02-16 04:32:57.635389 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-02-16 04:32:57.635401 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-02-16 04:32:57.635413 | orchestrator | 2026-02-16 04:32:57.635425 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-02-16 04:32:57.635438 | orchestrator | Monday 16 February 2026 04:32:57 +0000 (0:00:03.467) 0:01:04.396 ******* 2026-02-16 04:32:57.635463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-16 04:33:00.890879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:33:00.891018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 04:33:00.891048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 04:33:00.891169 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:33:00.891197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-16 04:33:00.891227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 04:33:00.891239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 04:33:00.891271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 04:33:00.891283 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:33:00.891295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-16 04:33:00.891316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-02-16 04:33:00.891327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 04:33:00.891344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 04:33:00.891356 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:33:00.891368 | orchestrator | 2026-02-16 04:33:00.891380 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-02-16 04:33:00.891392 | orchestrator | Monday 16 February 2026 04:32:57 +0000 (0:00:00.632) 0:01:05.028 ******* 2026-02-16 04:33:00.891412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-16 04:33:43.352299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-16 04:33:43.352395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-16 04:33:43.352404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:33:43.352421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:33:43.352425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-16 04:33:43.352440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-16 04:33:43.352446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-16 04:33:43.352455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-16 04:33:43.352459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-16 04:33:43.352467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-16 04:33:43.352471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-16 04:33:43.352476 | orchestrator |
2026-02-16 04:33:43.352482 | orchestrator | TASK [manila : Creating Manila database] ***************************************
2026-02-16 04:33:43.352488 | orchestrator | Monday 16 February 2026 04:33:00 +0000 (0:00:03.237) 0:01:08.266 *******
2026-02-16 04:33:43.352492 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:33:43.352497 | orchestrator |
2026-02-16 04:33:43.352501 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] **********
2026-02-16 04:33:43.352506 | orchestrator | Monday 16 February 2026 04:33:03 +0000 (0:00:02.240) 0:01:10.506 *******
2026-02-16 04:33:43.352510 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:33:43.352514 | orchestrator |
2026-02-16 04:33:43.352518 | orchestrator | TASK [manila : Running Manila bootstrap container] *****************************
2026-02-16 04:33:43.352522 | orchestrator | Monday 16 February 2026 04:33:05 +0000 (0:00:02.289) 0:01:12.796 *******
2026-02-16 04:33:43.352526 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:33:43.352530 | orchestrator |
2026-02-16 04:33:43.352534 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-16 04:33:43.352539 | orchestrator | Monday 16 February 2026 04:33:43 +0000 (0:00:37.610) 0:01:50.406 *******
2026-02-16 04:33:43.352547 | orchestrator |
2026-02-16 04:33:43.352554 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-16 04:34:33.991912 | orchestrator | Monday 16 February 2026 04:33:43 +0000 (0:00:00.073) 0:01:50.479 *******
2026-02-16 04:34:33.992022 | orchestrator |
2026-02-16 04:34:33.992037 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-16 04:34:33.992047 | orchestrator | Monday 16 February 2026 04:33:43 +0000 (0:00:00.072) 0:01:50.552 *******
2026-02-16 04:34:33.992057 | orchestrator |
2026-02-16 04:34:33.992066 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************
2026-02-16 04:34:33.992075 | orchestrator | Monday 16 February 2026 04:33:43 +0000 (0:00:00.071) 0:01:50.623 *******
2026-02-16 04:34:33.992085 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:34:33.992095 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:34:33.992105 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:34:33.992173 | orchestrator |
2026-02-16 04:34:33.992185 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] ***********************
2026-02-16 04:34:33.992195 | orchestrator | Monday 16 February 2026 04:33:58 +0000 (0:00:14.999) 0:02:05.622 *******
2026-02-16 04:34:33.992206 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:34:33.992215 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:34:33.992223 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:34:33.992241 | orchestrator |
2026-02-16 04:34:33.992251 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ******************
2026-02-16 04:34:33.992259 | orchestrator | Monday 16 February 2026 04:34:04 +0000 (0:00:06.167) 0:02:11.789 *******
2026-02-16 04:34:33.992268 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:34:33.992277 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:34:33.992286 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:34:33.992295 | orchestrator |
2026-02-16 04:34:33.992304 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-02-16 04:34:33.992313 | orchestrator | Monday 16 February 2026 04:34:14 +0000 (0:00:10.394) 0:02:22.184 *******
2026-02-16 04:34:33.992323 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:34:33.992332 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:34:33.992342 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:34:33.992351 | orchestrator |
2026-02-16 04:34:33.992360 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 04:34:33.992370 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-16 04:34:33.992382 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-16 04:34:33.992391 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-16 04:34:33.992400 | orchestrator |
2026-02-16 04:34:33.992408 | orchestrator |
2026-02-16 04:34:33.992417 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 04:34:33.992426 | orchestrator | Monday 16 February 2026 04:34:33 +0000 (0:00:18.642) 0:02:40.827 *******
2026-02-16 04:34:33.992435 | orchestrator | ===============================================================================
2026-02-16 04:34:33.992445 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 37.61s
2026-02-16 04:34:33.992454 | orchestrator | manila : Restart manila-share container -------------------------------- 18.64s
2026-02-16 04:34:33.992465 | orchestrator | manila : Restart manila-api container ---------------------------------- 15.00s
2026-02-16 04:34:33.992475 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 12.98s
2026-02-16 04:34:33.992487 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.39s
2026-02-16 04:34:33.992498 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.52s
2026-02-16 04:34:33.992508 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.41s
2026-02-16 04:34:33.992548 | orchestrator | manila : Restart manila-data container ---------------------------------- 6.17s
2026-02-16 04:34:33.992560 | orchestrator | manila : Copying over config.json files for services -------------------- 4.44s
2026-02-16 04:34:33.992569 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.09s
2026-02-16 04:34:33.992580 | orchestrator | service-ks-register : manila | Creating users --------------------------- 4.04s
2026-02-16 04:34:33.992590 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.96s
2026-02-16 04:34:33.992601 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.47s
2026-02-16 04:34:33.992610 | orchestrator | manila : Check manila containers ---------------------------------------- 3.24s
2026-02-16 04:34:33.992620 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.22s
2026-02-16 04:34:33.992630 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.18s
2026-02-16 04:34:33.992639 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.29s
2026-02-16 04:34:33.992648 | orchestrator | manila : Creating Manila database --------------------------------------- 2.24s
2026-02-16 04:34:33.992659 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.21s
2026-02-16 04:34:33.992670 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.78s
2026-02-16 04:34:34.319959 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh
2026-02-16 04:34:46.423482 | orchestrator | 2026-02-16 04:34:46
| INFO  | Task e5452b9d-3b7e-44ed-82c7-3c1c18af3cfa (netdata) was prepared for execution. 2026-02-16 04:34:46.423641 | orchestrator | 2026-02-16 04:34:46 | INFO  | It takes a moment until task e5452b9d-3b7e-44ed-82c7-3c1c18af3cfa (netdata) has been started and output is visible here. 2026-02-16 04:36:17.711345 | orchestrator | 2026-02-16 04:36:17.711497 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 04:36:17.711527 | orchestrator | 2026-02-16 04:36:17.711549 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 04:36:17.711568 | orchestrator | Monday 16 February 2026 04:34:50 +0000 (0:00:00.247) 0:00:00.247 ******* 2026-02-16 04:36:17.711589 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-02-16 04:36:17.711608 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-02-16 04:36:17.711624 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-02-16 04:36:17.711643 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-02-16 04:36:17.711658 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-02-16 04:36:17.711669 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-02-16 04:36:17.711680 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-02-16 04:36:17.711691 | orchestrator | 2026-02-16 04:36:17.711701 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-02-16 04:36:17.711712 | orchestrator | 2026-02-16 04:36:17.711723 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-02-16 04:36:17.711734 | orchestrator | Monday 16 February 2026 04:34:51 +0000 (0:00:00.888) 0:00:01.136 ******* 2026-02-16 04:36:17.711748 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 04:36:17.711762 | orchestrator | 2026-02-16 04:36:17.711773 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-02-16 04:36:17.711784 | orchestrator | Monday 16 February 2026 04:34:52 +0000 (0:00:01.318) 0:00:02.455 ******* 2026-02-16 04:36:17.711795 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:36:17.711878 | orchestrator | ok: [testbed-manager] 2026-02-16 04:36:17.711909 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:36:17.711956 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:36:17.711968 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:36:17.711980 | orchestrator | ok: [testbed-node-4] 2026-02-16 04:36:17.711990 | orchestrator | ok: [testbed-node-5] 2026-02-16 04:36:17.712001 | orchestrator | 2026-02-16 04:36:17.712012 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-02-16 04:36:17.712024 | orchestrator | Monday 16 February 2026 04:34:54 +0000 (0:00:01.805) 0:00:04.261 ******* 2026-02-16 04:36:17.712035 | orchestrator | ok: [testbed-manager] 2026-02-16 04:36:17.712045 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:36:17.712056 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:36:17.712067 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:36:17.712078 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:36:17.712088 | orchestrator | ok: [testbed-node-5] 2026-02-16 04:36:17.712099 | orchestrator | ok: [testbed-node-4] 2026-02-16 04:36:17.712110 | orchestrator | 2026-02-16 04:36:17.712120 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-02-16 04:36:17.712168 | orchestrator | Monday 16 February 2026 04:34:56 +0000 (0:00:02.083) 0:00:06.344 ******* 
2026-02-16 04:36:17.712180 | orchestrator | changed: [testbed-manager] 2026-02-16 04:36:17.712191 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:36:17.712201 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:36:17.712212 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:36:17.712223 | orchestrator | changed: [testbed-node-3] 2026-02-16 04:36:17.712234 | orchestrator | changed: [testbed-node-4] 2026-02-16 04:36:17.712244 | orchestrator | changed: [testbed-node-5] 2026-02-16 04:36:17.712255 | orchestrator | 2026-02-16 04:36:17.712266 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-02-16 04:36:17.712283 | orchestrator | Monday 16 February 2026 04:34:58 +0000 (0:00:01.446) 0:00:07.790 ******* 2026-02-16 04:36:17.712294 | orchestrator | changed: [testbed-manager] 2026-02-16 04:36:17.712304 | orchestrator | changed: [testbed-node-5] 2026-02-16 04:36:17.712315 | orchestrator | changed: [testbed-node-4] 2026-02-16 04:36:17.712326 | orchestrator | changed: [testbed-node-3] 2026-02-16 04:36:17.712336 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:36:17.712347 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:36:17.712357 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:36:17.712368 | orchestrator | 2026-02-16 04:36:17.712379 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-02-16 04:36:17.712390 | orchestrator | Monday 16 February 2026 04:35:13 +0000 (0:00:14.797) 0:00:22.588 ******* 2026-02-16 04:36:17.712400 | orchestrator | changed: [testbed-node-3] 2026-02-16 04:36:17.712411 | orchestrator | changed: [testbed-node-5] 2026-02-16 04:36:17.712423 | orchestrator | changed: [testbed-manager] 2026-02-16 04:36:17.712433 | orchestrator | changed: [testbed-node-4] 2026-02-16 04:36:17.712444 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:36:17.712454 | orchestrator | changed: [testbed-node-2] 2026-02-16 
04:36:17.712465 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:36:17.712475 | orchestrator | 2026-02-16 04:36:17.712486 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-02-16 04:36:17.712497 | orchestrator | Monday 16 February 2026 04:35:52 +0000 (0:00:39.086) 0:01:01.674 ******* 2026-02-16 04:36:17.712509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 04:36:17.712522 | orchestrator | 2026-02-16 04:36:17.712533 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-02-16 04:36:17.712544 | orchestrator | Monday 16 February 2026 04:35:53 +0000 (0:00:01.536) 0:01:03.211 ******* 2026-02-16 04:36:17.712555 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-02-16 04:36:17.712566 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-02-16 04:36:17.712577 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-02-16 04:36:17.712597 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-02-16 04:36:17.712629 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-02-16 04:36:17.712641 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-02-16 04:36:17.712652 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-02-16 04:36:17.712662 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-02-16 04:36:17.712673 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-02-16 04:36:17.712684 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-02-16 04:36:17.712694 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-02-16 04:36:17.712705 | orchestrator | changed: [testbed-node-3] => 
(item=stream.conf) 2026-02-16 04:36:17.712716 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-02-16 04:36:17.712726 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-02-16 04:36:17.712737 | orchestrator | 2026-02-16 04:36:17.712748 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-02-16 04:36:17.712760 | orchestrator | Monday 16 February 2026 04:35:57 +0000 (0:00:03.345) 0:01:06.556 ******* 2026-02-16 04:36:17.712770 | orchestrator | ok: [testbed-manager] 2026-02-16 04:36:17.712781 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:36:17.712792 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:36:17.712802 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:36:17.712820 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:36:17.712838 | orchestrator | ok: [testbed-node-4] 2026-02-16 04:36:17.712858 | orchestrator | ok: [testbed-node-5] 2026-02-16 04:36:17.712876 | orchestrator | 2026-02-16 04:36:17.712894 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-02-16 04:36:17.712913 | orchestrator | Monday 16 February 2026 04:35:58 +0000 (0:00:01.238) 0:01:07.794 ******* 2026-02-16 04:36:17.712932 | orchestrator | changed: [testbed-manager] 2026-02-16 04:36:17.712953 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:36:17.712973 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:36:17.712991 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:36:17.713011 | orchestrator | changed: [testbed-node-3] 2026-02-16 04:36:17.713025 | orchestrator | changed: [testbed-node-4] 2026-02-16 04:36:17.713036 | orchestrator | changed: [testbed-node-5] 2026-02-16 04:36:17.713047 | orchestrator | 2026-02-16 04:36:17.713057 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-02-16 04:36:17.713069 | orchestrator | Monday 16 February 2026 04:35:59 +0000 
(0:00:01.313) 0:01:09.107 ******* 2026-02-16 04:36:17.713079 | orchestrator | ok: [testbed-manager] 2026-02-16 04:36:17.713090 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:36:17.713100 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:36:17.713111 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:36:17.713122 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:36:17.713167 | orchestrator | ok: [testbed-node-4] 2026-02-16 04:36:17.713181 | orchestrator | ok: [testbed-node-5] 2026-02-16 04:36:17.713192 | orchestrator | 2026-02-16 04:36:17.713203 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-02-16 04:36:17.713214 | orchestrator | Monday 16 February 2026 04:36:00 +0000 (0:00:01.206) 0:01:10.314 ******* 2026-02-16 04:36:17.713224 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:36:17.713235 | orchestrator | ok: [testbed-manager] 2026-02-16 04:36:17.713246 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:36:17.713256 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:36:17.713267 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:36:17.713278 | orchestrator | ok: [testbed-node-4] 2026-02-16 04:36:17.713288 | orchestrator | ok: [testbed-node-5] 2026-02-16 04:36:17.713299 | orchestrator | 2026-02-16 04:36:17.713309 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-02-16 04:36:17.713320 | orchestrator | Monday 16 February 2026 04:36:02 +0000 (0:00:01.609) 0:01:11.923 ******* 2026-02-16 04:36:17.713331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-02-16 04:36:17.713360 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 04:36:17.713372 | orchestrator | 2026-02-16 
04:36:17.713383 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-02-16 04:36:17.713394 | orchestrator | Monday 16 February 2026 04:36:03 +0000 (0:00:01.356) 0:01:13.280 ******* 2026-02-16 04:36:17.713405 | orchestrator | changed: [testbed-manager] 2026-02-16 04:36:17.713415 | orchestrator | 2026-02-16 04:36:17.713426 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-02-16 04:36:17.713437 | orchestrator | Monday 16 February 2026 04:36:06 +0000 (0:00:02.342) 0:01:15.622 ******* 2026-02-16 04:36:17.713448 | orchestrator | changed: [testbed-node-4] 2026-02-16 04:36:17.713458 | orchestrator | changed: [testbed-manager] 2026-02-16 04:36:17.713469 | orchestrator | changed: [testbed-node-3] 2026-02-16 04:36:17.713479 | orchestrator | changed: [testbed-node-5] 2026-02-16 04:36:17.713490 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:36:17.713501 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:36:17.713511 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:36:17.713522 | orchestrator | 2026-02-16 04:36:17.713532 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 04:36:17.713543 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 04:36:17.713555 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 04:36:17.713566 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 04:36:17.713577 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 04:36:17.713597 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 04:36:18.114534 | orchestrator | testbed-node-4 : ok=15  changed=7  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 04:36:18.114644 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 04:36:18.114665 | orchestrator | 2026-02-16 04:36:18.114680 | orchestrator | 2026-02-16 04:36:18.114696 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 04:36:18.114713 | orchestrator | Monday 16 February 2026 04:36:17 +0000 (0:00:11.618) 0:01:27.240 ******* 2026-02-16 04:36:18.114727 | orchestrator | =============================================================================== 2026-02-16 04:36:18.114742 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 39.09s 2026-02-16 04:36:18.114756 | orchestrator | osism.services.netdata : Add repository -------------------------------- 14.80s 2026-02-16 04:36:18.114769 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.62s 2026-02-16 04:36:18.114782 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.35s 2026-02-16 04:36:18.114795 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.34s 2026-02-16 04:36:18.114809 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.08s 2026-02-16 04:36:18.114821 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.81s 2026-02-16 04:36:18.114834 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.61s 2026-02-16 04:36:18.114847 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.54s 2026-02-16 04:36:18.114892 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.45s 2026-02-16 04:36:18.114908 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 
1.36s 2026-02-16 04:36:18.114922 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.32s 2026-02-16 04:36:18.114935 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.31s 2026-02-16 04:36:18.114949 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.24s 2026-02-16 04:36:18.114964 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.21s 2026-02-16 04:36:18.114979 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.89s 2026-02-16 04:36:20.557485 | orchestrator | 2026-02-16 04:36:20 | INFO  | Task 6b0cfe95-cc63-496a-8237-7b555c1dafd0 (prometheus) was prepared for execution. 2026-02-16 04:36:20.557566 | orchestrator | 2026-02-16 04:36:20 | INFO  | It takes a moment until task 6b0cfe95-cc63-496a-8237-7b555c1dafd0 (prometheus) has been started and output is visible here. 2026-02-16 04:36:30.243721 | orchestrator | 2026-02-16 04:36:30.243840 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 04:36:30.243853 | orchestrator | 2026-02-16 04:36:30.243861 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 04:36:30.243868 | orchestrator | Monday 16 February 2026 04:36:24 +0000 (0:00:00.287) 0:00:00.287 ******* 2026-02-16 04:36:30.243876 | orchestrator | ok: [testbed-manager] 2026-02-16 04:36:30.243893 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:36:30.243901 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:36:30.243907 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:36:30.243914 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:36:30.243921 | orchestrator | ok: [testbed-node-4] 2026-02-16 04:36:30.243929 | orchestrator | ok: [testbed-node-5] 2026-02-16 04:36:30.243936 | orchestrator | 2026-02-16 04:36:30.243943 | orchestrator | 
TASK [Group hosts based on enabled services] *********************************** 2026-02-16 04:36:30.243950 | orchestrator | Monday 16 February 2026 04:36:25 +0000 (0:00:00.856) 0:00:01.144 ******* 2026-02-16 04:36:30.243958 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-02-16 04:36:30.243973 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-02-16 04:36:30.243980 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-02-16 04:36:30.243987 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-02-16 04:36:30.243994 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-02-16 04:36:30.244000 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-02-16 04:36:30.244007 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-02-16 04:36:30.244014 | orchestrator | 2026-02-16 04:36:30.244021 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-02-16 04:36:30.244028 | orchestrator | 2026-02-16 04:36:30.244034 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-16 04:36:30.244041 | orchestrator | Monday 16 February 2026 04:36:26 +0000 (0:00:00.937) 0:00:02.081 ******* 2026-02-16 04:36:30.244049 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 04:36:30.244056 | orchestrator | 2026-02-16 04:36:30.244063 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-02-16 04:36:30.244070 | orchestrator | Monday 16 February 2026 04:36:28 +0000 (0:00:01.438) 0:00:03.520 ******* 2026-02-16 04:36:30.244080 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-16 04:36:30.244108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-16 04:36:30.244117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-16 04:36:30.244169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-16 04:36:30.244199 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-16 04:36:30.244208 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-16 04:36:30.244215 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-16 04:36:30.244222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:30.244235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:30.244243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:30.244250 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-16 04:36:30.244263 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-16 04:36:31.212756 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-16 04:36:31.212851 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-16 04:36:31.212865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:31.212897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:31.212911 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-16 04:36:31.212924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:31.212950 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-16 04:36:31.212967 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-16 04:36:31.212978 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-16 04:36:31.212989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-16 04:36:31.213005 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:31.213015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-16 04:36:31.213026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-16 04:36:31.213035 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-16 04:36:31.213058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:35.949965 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:35.950197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:35.950246 | orchestrator | 2026-02-16 04:36:35.950262 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-16 04:36:35.950275 | orchestrator | Monday 16 February 2026 04:36:31 +0000 (0:00:03.036) 0:00:06.556 ******* 2026-02-16 04:36:35.950288 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 04:36:35.950301 | orchestrator | 2026-02-16 04:36:35.950313 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-16 04:36:35.950324 | orchestrator | Monday 16 February 2026 04:36:32 +0000 (0:00:01.606) 0:00:08.163 ******* 2026-02-16 04:36:35.950337 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 
'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-16 04:36:35.950350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-16 04:36:35.950362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-16 04:36:35.950373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-16 04:36:35.950420 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-16 04:36:35.950433 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-16 04:36:35.950453 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-02-16 04:36:35.950465 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-16 04:36:35.950476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:35.950508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:35.950521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:35.950546 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-16 04:36:35.950575 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-16 04:36:38.174478 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-16 04:36:38.174650 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-16 04:36:38.174688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:38.174720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:38.174738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:38.174795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-16 04:36:38.174829 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-16 04:36:38.174893 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-16 04:36:38.174912 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-16 04:36:38.174929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-16 04:36:38.174944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-16 04:36:38.174958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-16 04:36:38.174972 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:38.174992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:38.175036 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:38.985873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:38.985978 | orchestrator | 2026-02-16 04:36:38.985996 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-16 04:36:38.986096 | orchestrator | Monday 16 February 2026 04:36:38 +0000 (0:00:05.349) 0:00:13.512 ******* 2026-02-16 04:36:38.986186 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-16 04:36:38.986212 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-16 04:36:38.986233 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-16 04:36:38.986269 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-16 04:36:38.986348 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 04:36:38.986370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-16 04:36:38.986392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 04:36:38.986412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 04:36:38.986434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-16 04:36:38.986455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 04:36:38.986489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:38.986516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:38.986550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:39.617176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:36:39.617310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:39.617340 | orchestrator | skipping: [testbed-manager]
2026-02-16 04:36:39.617375 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:36:39.617398 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:36:39.617411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:39.617423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:39.617436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:39.617512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:36:39.617527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:39.617538 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:36:39.617570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:39.617582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:36:39.617594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-16 04:36:39.617606 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:36:39.617617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:39.617629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:36:39.617649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-16 04:36:39.617663 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:36:39.617681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:39.617704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:36:40.455862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-16 04:36:40.455973 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:36:40.455993 | orchestrator |
2026-02-16 04:36:40.456009 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-02-16 04:36:40.456024 | orchestrator | Monday 16 February 2026 04:36:39 +0000 (0:00:01.443) 0:00:14.956 *******
2026-02-16 04:36:40.456036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:40.456051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:40.456090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:40.456119 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-16 04:36:40.456166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:36:40.456201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:40.456216 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:40.456231 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:36:40.456247 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-16 04:36:40.456272 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:40.456293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:40.456307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:40.456328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:41.682152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:36:41.682233 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:36:41.682245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:41.682268 | orchestrator | skipping: [testbed-manager]
2026-02-16 04:36:41.682274 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:36:41.682281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:41.682287 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:36:41.682294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-16 04:36:41.682300 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:36:41.682316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:41.682323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:41.682341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:41.682348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:36:41.682360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:41.682366 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:36:41.682372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:41.682378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:36:41.682388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:41.682394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-16 04:36:41.682404 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:36:45.219516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-16 04:36:45.219635 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:36:45.219649 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:36:45.219658 | orchestrator |
2026-02-16 04:36:45.219667 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-02-16 04:36:45.219676 | orchestrator | Monday 16 February 2026 04:36:41 +0000 (0:00:02.057) 0:00:17.013 *******
2026-02-16 04:36:45.219687 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-16 04:36:45.219697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:45.219706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:45.219725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:45.219734 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:45.219758 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:45.219767 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:45.219782 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:36:45.219791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:45.219800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:45.219808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:45.219822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:36:45.219832 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:36:45.219848 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:36:47.832443 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:36:47.832533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:47.832546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:47.832558 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-16 04:36:47.832587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:36:47.832600 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-16 04:36:47.832613 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-16 04:36:47.832661 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-16 04:36:47.832672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-16 04:36:47.832679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-16 04:36:47.832686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-16 04:36:47.832698 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:47.832705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:47.832713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:47.832732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 04:36:51.509847 | orchestrator | 2026-02-16 04:36:51.509936 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-02-16 04:36:51.509946 | orchestrator | Monday 16 February 2026 04:36:47 +0000 (0:00:06.153) 0:00:23.167 ******* 2026-02-16 04:36:51.509951 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-16 04:36:51.509956 | orchestrator | 2026-02-16 04:36:51.509960 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-02-16 04:36:51.509965 | orchestrator | Monday 16 February 2026 04:36:48 +0000 (0:00:00.868) 0:00:24.036 ******* 2026-02-16 04:36:51.509970 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1080730, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.792241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:51.509978 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1080730, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.792241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:51.509983 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1080755, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7975976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:51.509998 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1080730, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.792241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 04:36:51.510035 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1080755, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7975976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:51.510039 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1080730, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.792241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:51.510056 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1080730, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.792241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:51.510061 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1080730, 'dev': 102, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.792241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:51.510065 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1080730, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.792241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:51.510069 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1080721, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7914088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:51.510076 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1080721, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7914088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:51.510085 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1080755, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7975976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:51.510089 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1080755, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7975976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:51.510102 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1080755, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7975976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:53.163816 | 
orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1080745, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7963562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:53.163930 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1080755, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7975976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:53.163950 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1080745, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7963562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:53.163983 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1080721, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7914088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:53.164021 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1080755, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7975976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 04:36:53.164037 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1080715, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7901402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:53.164051 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1080715, 'dev': 102, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7901402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:53.164086 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1080721, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7914088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:53.164100 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1080721, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7914088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:53.164115 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1080721, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7914088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:53.164188 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1080745, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7963562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:53.164228 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1080731, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7927113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:53.164243 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1080745, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7963562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:53.164258 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1080731, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7927113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:53.164283 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1080745, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7963562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:54.499597 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1080745, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7963562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:54.499676 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1080715, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7901402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:54.499715 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1080715, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7901402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:54.499723 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1080715, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7901402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:54.499729 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1080743, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1771209565.794852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:54.499736 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1080743, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.794852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:54.499742 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1080715, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7901402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:54.499761 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1080731, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7927113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:54.499768 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1080734, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7930958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:54.499784 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1080731, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7927113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:54.499790 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1080731, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7927113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:54.499797 | orchestrator | skipping: [testbed-node-0] => 
(item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1080734, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7930958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:54.499803 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1080721, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7914088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 04:36:54.499810 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1080731, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7927113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:54.499821 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1080743, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.794852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:55.907806 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1080728, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.792241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:55.907931 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1080728, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.792241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:55.907942 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1080743, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1771209565.794852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:55.907951 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1080743, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.794852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:55.907959 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1080743, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.794852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:55.907967 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080752, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7971528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:55.907976 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080752, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7971528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:55.907999 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1080734, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7930958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:55.908014 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1080734, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7930958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:55.908020 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080711, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7889433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:55.908027 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1080734, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7930958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:55.908033 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1080734, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7930958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:55.908039 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1080728, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.792241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:55.908045 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080711, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7889433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:55.908064 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1080728, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.792241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:57.145552 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080752, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1771209565.7971528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:57.145650 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1080766, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.799568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:57.145659 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1080728, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.792241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:57.145663 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1080745, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7963562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 04:36:57.145668 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1080728, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.792241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:57.145672 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1080751, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7968383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:57.145696 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1080766, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.799568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:57.145720 | orchestrator | skipping: [testbed-node-3] => 
(item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080752, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7971528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:57.145736 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080711, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7889433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:57.145744 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080752, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7971528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:57.145751 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080752, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7971528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:57.145758 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1080751, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7968383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:57.145765 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080719, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7905347, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:57.145778 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1080766, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1771209565.799568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:57.145794 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080711, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7889433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:58.462273 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080711, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7889433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:58.462382 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080719, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7905347, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:58.462399 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1080714, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7898362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:58.462412 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1080766, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.799568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:58.462447 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1080715, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7901402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-16 04:36:58.462459 | orchestrator | skipping: 
[testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1080751, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7968383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:58.462487 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080711, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7889433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:58.462519 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1080741, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.794852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:58.462564 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1080714, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7898362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:58.462576 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1080766, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.799568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:58.462588 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1080751, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7968383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:58.462608 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1080766, 'dev': 102, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.799568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:58.462622 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1080738, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.79414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:58.462641 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080719, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7905347, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:58.462664 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080719, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7905347, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:59.674262 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1080764, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7990994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:59.674394 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:36:59.674426 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1080751, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7968383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:59.674451 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1080741, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.794852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-02-16 04:36:59.674503 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1080714, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7898362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:59.674517 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1080751, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7968383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:59.674543 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1080714, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7898362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-16 04:36:59.674578 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1080731, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7927113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:36:59.674591 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1080738, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.79414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:36:59.674602 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080719, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7905347, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:36:59.674624 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080719, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7905347, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:36:59.674637 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1080741, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.794852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:36:59.674650 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1080741, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.794852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:36:59.674670 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1080764, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7990994, 'gr_name':
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:36:59.674684 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:36:59.674704 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1080738, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.79414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:05.216039 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1080714, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7898362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:05.216196 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1080714, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7898362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:05.216233 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1080738, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.79414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:05.216245 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1080764, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7990994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:05.216255 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:37:05.216269 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1080741, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.794852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False,
'isgid': False})
2026-02-16 04:37:05.216302 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1080743, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.794852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:05.216319 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1080764, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7990994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:05.216353 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:37:05.216369 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1080741, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.794852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:05.216384 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1080738, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.79414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:05.216411 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1080738, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.79414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:05.216426 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1080764, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7990994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:05.216441 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:37:05.216456 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr':
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1080764, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7990994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:05.216466 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:37:05.216480 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1080734, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7930958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:05.216498 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1080728, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.792241, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:15.437662 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080752, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7971528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:15.437795 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080711, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7889433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:15.437807 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1080766, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.799568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:15.437814 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1080751, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7968383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:15.437823 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1080719, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7905347, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:15.437842 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1080714, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7898362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:15.437850 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1080741, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.794852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False,
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:15.437870 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1080738, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.79414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:15.437927 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1080764, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7990994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-16 04:37:15.437937 | orchestrator |
2026-02-16 04:37:15.437945 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-02-16 04:37:15.437955 | orchestrator | Monday 16 February 2026 04:37:12 +0000 (0:00:24.040) 0:00:48.077 *******
2026-02-16 04:37:15.437963 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-16 04:37:15.437971 | orchestrator |
2026-02-16 04:37:15.437978 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-02-16 04:37:15.437985 | orchestrator | Monday 16 February 2026 04:37:13 +0000 (0:00:00.749) 0:00:48.826 *******
2026-02-16 04:37:15.437992 | orchestrator | [WARNING]: Skipped
2026-02-16 04:37:15.437999 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-16 04:37:15.438007 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-02-16 04:37:15.438014 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-16 04:37:15.438065 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-02-16 04:37:15.438072 | orchestrator | [WARNING]: Skipped
2026-02-16 04:37:15.438079 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-16 04:37:15.438086 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-02-16 04:37:15.438092 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-16 04:37:15.438099 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-02-16 04:37:15.438106 | orchestrator | [WARNING]: Skipped
2026-02-16 04:37:15.438113 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-16 04:37:15.438119 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-02-16 04:37:15.438147 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-16 04:37:15.438156 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-02-16 04:37:15.438162 | orchestrator | [WARNING]: Skipped
2026-02-16 04:37:15.438169 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-16 04:37:15.438176 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-02-16 04:37:15.438183 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-16 04:37:15.438189 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-02-16 04:37:15.438196 | orchestrator | [WARNING]: Skipped
2026-02-16 04:37:15.438203 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-16 04:37:15.438209 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-02-16 04:37:15.438217 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-16 04:37:15.438224 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-02-16 04:37:15.438231 | orchestrator | [WARNING]: Skipped
2026-02-16 04:37:15.438239 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-16 04:37:15.438253 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-02-16 04:37:15.438261 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-16 04:37:15.438274 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-02-16 04:37:15.438281 | orchestrator | [WARNING]: Skipped
2026-02-16 04:37:15.438289 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-16 04:37:15.438296 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-02-16 04:37:15.438304 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-16 04:37:15.438311 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-02-16 04:37:15.438319 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-16 04:37:15.438326 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-16 04:37:15.438334 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-16 04:37:15.438341 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-16 04:37:15.438351 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-16 04:37:15.438362 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-16 04:37:15.438373 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-16 04:37:15.438384 | orchestrator |
2026-02-16 04:37:15.438403 | orchestrator | TASK
[prometheus : Copying over prometheus config file] ************************
2026-02-16 04:37:46.313107 | orchestrator | Monday 16 February 2026 04:37:15 +0000 (0:00:01.944) 0:00:50.771 *******
2026-02-16 04:37:46.313201 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-16 04:37:46.313209 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:37:46.313215 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-16 04:37:46.313219 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:37:46.313224 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-16 04:37:46.313228 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:37:46.313232 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-16 04:37:46.313236 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:37:46.313240 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-16 04:37:46.313244 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:37:46.313248 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-16 04:37:46.313252 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:37:46.313256 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-16 04:37:46.313260 | orchestrator |
2026-02-16 04:37:46.313265 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-02-16 04:37:46.313269 | orchestrator | Monday 16 February 2026 04:37:32 +0000 (0:00:16.999) 0:01:07.771 *******
2026-02-16 04:37:46.313273 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-16 04:37:46.313277 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:37:46.313281 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-16 04:37:46.313285 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:37:46.313289 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-16 04:37:46.313293 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:37:46.313297 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-16 04:37:46.313301 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:37:46.313305 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-16 04:37:46.313322 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:37:46.313327 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-16 04:37:46.313331 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:37:46.313335 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-16 04:37:46.313339 | orchestrator |
2026-02-16 04:37:46.313343 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-02-16 04:37:46.313347 | orchestrator | Monday 16 February 2026 04:37:35 +0000 (0:00:02.895) 0:01:10.666 *******
2026-02-16 04:37:46.313351 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-16 04:37:46.313356 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:37:46.313360 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-16 04:37:46.313364 |
orchestrator | skipping: [testbed-node-1]
2026-02-16 04:37:46.313368 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-16 04:37:46.313372 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:37:46.313376 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-16 04:37:46.313380 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:37:46.313384 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-16 04:37:46.313399 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-16 04:37:46.313403 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:37:46.313407 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-16 04:37:46.313411 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:37:46.313415 | orchestrator |
2026-02-16 04:37:46.313419 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-02-16 04:37:46.313423 | orchestrator | Monday 16 February 2026 04:37:37 +0000 (0:00:01.840) 0:01:12.507 *******
2026-02-16 04:37:46.313427 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-16 04:37:46.313431 | orchestrator |
2026-02-16 04:37:46.313435 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-02-16 04:37:46.313440 | orchestrator | Monday 16 February 2026 04:37:37 +0000 (0:00:00.712) 0:01:13.220 *******
2026-02-16 04:37:46.313443 | orchestrator | skipping: [testbed-manager]
2026-02-16 04:37:46.313447 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:37:46.313451 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:37:46.313455 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:37:46.313469 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:37:46.313473 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:37:46.313477 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:37:46.313481 | orchestrator |
2026-02-16 04:37:46.313485 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-02-16 04:37:46.313489 | orchestrator | Monday 16 February 2026 04:37:38 +0000 (0:00:00.745) 0:01:13.965 *******
2026-02-16 04:37:46.313493 | orchestrator | skipping: [testbed-manager]
2026-02-16 04:37:46.313497 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:37:46.313500 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:37:46.313504 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:37:46.313508 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:37:46.313512 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:37:46.313516 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:37:46.313520 | orchestrator |
2026-02-16 04:37:46.313524 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-02-16 04:37:46.313532 | orchestrator | Monday 16 February 2026 04:37:40 +0000 (0:00:02.015) 0:01:15.981 *******
2026-02-16 04:37:46.313536 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-16 04:37:46.313540 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-16 04:37:46.313544 | orchestrator | skipping: [testbed-manager]
2026-02-16 04:37:46.313548 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-16 04:37:46.313551 | orchestrator | skipping: [testbed-node-2] =>
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-16 04:37:46.313555 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-16 04:37:46.313559 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:37:46.313563 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:37:46.313567 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:37:46.313571 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:37:46.313575 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-16 04:37:46.313579 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:37:46.313582 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-16 04:37:46.313586 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:37:46.313590 | orchestrator | 2026-02-16 04:37:46.313594 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-02-16 04:37:46.313598 | orchestrator | Monday 16 February 2026 04:37:42 +0000 (0:00:01.510) 0:01:17.491 ******* 2026-02-16 04:37:46.313602 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-16 04:37:46.313606 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:37:46.313610 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-16 04:37:46.313614 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:37:46.313617 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-16 04:37:46.313621 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:37:46.313625 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  
2026-02-16 04:37:46.313629 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:37:46.313633 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-16 04:37:46.313637 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:37:46.313641 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-16 04:37:46.313645 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:37:46.313648 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-16 04:37:46.313652 | orchestrator |
2026-02-16 04:37:46.313656 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-02-16 04:37:46.313660 | orchestrator | Monday 16 February 2026 04:37:43 +0000 (0:00:01.497) 0:01:18.989 *******
2026-02-16 04:37:46.313664 | orchestrator | [WARNING]: Skipped
2026-02-16 04:37:46.313669 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-02-16 04:37:46.313673 | orchestrator | due to this access issue:
2026-02-16 04:37:46.313678 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-02-16 04:37:46.313682 | orchestrator | not a directory
2026-02-16 04:37:46.313690 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-16 04:37:46.313694 | orchestrator |
2026-02-16 04:37:46.313699 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-02-16 04:37:46.313707 | orchestrator | Monday 16 February 2026 04:37:44 +0000 (0:00:01.159) 0:01:20.149 *******
2026-02-16 04:37:46.313711 | orchestrator | skipping: [testbed-manager]
2026-02-16 04:37:46.313715 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:37:46.313720 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:37:46.313724 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:37:46.313728 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:37:46.313733 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:37:46.313737 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:37:46.313741 | orchestrator |
2026-02-16 04:37:46.313746 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-02-16 04:37:46.313751 | orchestrator | Monday 16 February 2026 04:37:45 +0000 (0:00:00.970) 0:01:21.119 *******
2026-02-16 04:37:46.313755 | orchestrator | skipping: [testbed-manager]
2026-02-16 04:37:46.313760 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:37:46.313764 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:37:46.313771 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:37:49.172017 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:37:49.172123 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:37:49.172184 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:37:49.172196 | orchestrator |
2026-02-16 04:37:49.172209 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-02-16 04:37:49.172222 | orchestrator | Monday 16 February 2026 04:37:46 +0000 (0:00:00.969) 0:01:22.088 *******
2026-02-16 04:37:49.172236 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-16 04:37:49.172253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:37:49.172266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:37:49.172278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:37:49.172289 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:37:49.172342 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:37:49.172373 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:37:49.172386 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-16 04:37:49.172398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:37:49.172409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:37:49.172421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:37:49.172433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:37:49.172453 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:37:49.172470 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:37:49.172490 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:37:52.729037 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-16 04:37:52.729199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:37:52.729219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:37:52.729231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:37:52.729269 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-16 04:37:52.729298 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-16 04:37:52.729314 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-16 04:37:52.729346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:37:52.729360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:37:52.729372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-16 04:37:52.729384 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:37:52.729405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:37:52.729423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:37:52.729435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 04:37:52.729447 | orchestrator |
2026-02-16 04:37:52.729461 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-02-16 04:37:52.729474 | orchestrator | Monday 16 February 2026 04:37:50 +0000 (0:00:04.004) 0:01:26.092 *******
2026-02-16 04:37:52.729485 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-16 04:37:52.729496 | orchestrator | skipping: [testbed-manager]
2026-02-16 04:37:52.729507 | orchestrator |
2026-02-16 04:37:52.729526 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-16 04:39:48.904611 | orchestrator | Monday 16 February 2026 04:37:52 +0000 (0:00:01.270) 0:01:27.363 *******
2026-02-16 04:39:48.904721 | orchestrator |
2026-02-16 04:39:48.904737 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-16 04:39:48.904749 | orchestrator | Monday 16 February 2026 04:37:52 +0000 (0:00:00.261) 0:01:27.625 *******
2026-02-16 04:39:48.904759 | orchestrator |
2026-02-16 04:39:48.904769 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-16 04:39:48.904779 | orchestrator | Monday 16 February 2026 04:37:52 +0000 (0:00:00.073) 0:01:27.698 *******
2026-02-16 04:39:48.904789 | orchestrator |
2026-02-16 04:39:48.904799 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-16 04:39:48.904808 | orchestrator | Monday 16 February 2026 04:37:52 +0000 (0:00:00.071) 0:01:27.770 *******
2026-02-16 04:39:48.904818 | orchestrator |
2026-02-16 04:39:48.904828 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-16 04:39:48.904837 | orchestrator | Monday 16 February 2026 04:37:52 +0000 (0:00:00.066) 0:01:27.837 *******
2026-02-16 04:39:48.904847 | orchestrator |
2026-02-16 04:39:48.904857 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-16 04:39:48.904867 | orchestrator | Monday 16 February 2026 04:37:52 +0000 (0:00:00.071) 0:01:27.908 *******
2026-02-16 04:39:48.904876 | orchestrator |
2026-02-16 04:39:48.904907 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-16 04:39:48.904918 | orchestrator | Monday 16 February 2026 04:37:52 +0000 (0:00:00.067) 0:01:27.976 *******
2026-02-16 04:39:48.904928 | orchestrator |
2026-02-16 04:39:48.904937 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-02-16 04:39:48.904947 | orchestrator | Monday 16 February 2026 04:37:52 +0000 (0:00:00.093) 0:01:28.069 *******
2026-02-16 04:39:48.904957 | orchestrator | changed: [testbed-manager]
2026-02-16 04:39:48.904967 | orchestrator |
2026-02-16 04:39:48.904977 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-02-16 04:39:48.904987 | orchestrator | Monday 16 February 2026 04:38:15 +0000 (0:00:22.417) 0:01:50.487 *******
2026-02-16 04:39:48.904997 | orchestrator | changed: [testbed-manager]
2026-02-16 04:39:48.905006 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:39:48.905016 | orchestrator | changed: [testbed-node-4]
2026-02-16 04:39:48.905026 | orchestrator | changed: [testbed-node-3]
2026-02-16 04:39:48.905035 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:39:48.905045 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:39:48.905055 | orchestrator | changed: [testbed-node-5]
2026-02-16 04:39:48.905065 | orchestrator |
2026-02-16 04:39:48.905074 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-02-16 04:39:48.905084 | orchestrator | Monday 16 February 2026 04:38:28 +0000 (0:00:13.584) 0:02:04.071 *******
2026-02-16 04:39:48.905094 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:39:48.905103 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:39:48.905140 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:39:48.905152 | orchestrator |
2026-02-16 04:39:48.905164 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-02-16 04:39:48.905175 | orchestrator | Monday 16 February 2026 04:38:39 +0000 (0:00:10.588) 0:02:14.659 *******
2026-02-16 04:39:48.905186 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:39:48.905197 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:39:48.905209 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:39:48.905220 | orchestrator |
2026-02-16 04:39:48.905231 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-02-16 04:39:48.905242 | orchestrator | Monday 16 February 2026 04:38:49 +0000 (0:00:10.474) 0:02:25.133 *******
2026-02-16 04:39:48.905253 | orchestrator | changed: [testbed-node-3]
2026-02-16 04:39:48.905264 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:39:48.905274 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:39:48.905285 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:39:48.905296 | orchestrator | changed: [testbed-manager]
2026-02-16 04:39:48.905307 | orchestrator | changed: [testbed-node-5]
2026-02-16 04:39:48.905317 | orchestrator | changed: [testbed-node-4]
2026-02-16 04:39:48.905328 | orchestrator |
2026-02-16 04:39:48.905339 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-02-16 04:39:48.905350 | orchestrator | Monday 16 February 2026 04:39:03 +0000 (0:00:14.154) 0:02:39.288 *******
2026-02-16 04:39:48.905361 | orchestrator | changed: [testbed-manager]
2026-02-16 04:39:48.905372 | orchestrator |
2026-02-16 04:39:48.905396 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-02-16 04:39:48.905408 | orchestrator | Monday 16 February 2026 04:39:17 +0000 (0:00:13.546) 0:02:52.834 *******
2026-02-16 04:39:48.905419 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:39:48.905431 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:39:48.905441 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:39:48.905453 | orchestrator |
2026-02-16 04:39:48.905464 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-02-16 04:39:48.905475 | orchestrator | Monday 16 February 2026 04:39:27 +0000 (0:00:10.355) 0:03:03.189 *******
2026-02-16 04:39:48.905487 | orchestrator | changed: [testbed-manager]
2026-02-16 04:39:48.905497 | orchestrator |
2026-02-16 04:39:48.905509 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-02-16 04:39:48.905529 | orchestrator | Monday 16 February 2026 04:39:38 +0000 (0:00:10.435) 0:03:13.625 *******
2026-02-16 04:39:48.905538 | orchestrator | changed: [testbed-node-5]
2026-02-16 04:39:48.905548 | orchestrator | changed: [testbed-node-3]
2026-02-16 04:39:48.905558 | orchestrator | changed: [testbed-node-4]
2026-02-16 04:39:48.905567 | orchestrator |
2026-02-16 04:39:48.905577 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 04:39:48.905588 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-16 04:39:48.905599 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-16 04:39:48.905625 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-16 04:39:48.905635 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-16 04:39:48.905645 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-16 04:39:48.905655 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-16 04:39:48.905664 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-16 04:39:48.905674 | orchestrator |
2026-02-16 04:39:48.905684 | orchestrator |
2026-02-16 04:39:48.905694 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 04:39:48.905703 | orchestrator | Monday 16 February 2026 04:39:48 +0000 (0:00:10.085) 0:03:23.710 *******
2026-02-16 04:39:48.905713 | orchestrator | ===============================================================================
2026-02-16 04:39:48.905723 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.04s
2026-02-16 04:39:48.905732 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 22.42s
2026-02-16 04:39:48.905742 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.00s
2026-02-16 04:39:48.905751 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.15s
2026-02-16 04:39:48.905761 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.58s
2026-02-16 04:39:48.905770 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 13.55s
2026-02-16 04:39:48.905780 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.59s
2026-02-16 04:39:48.905789 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.47s
2026-02-16 04:39:48.905799 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.44s
2026-02-16 04:39:48.905808 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.36s
2026-02-16 04:39:48.905818 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.09s
2026-02-16 04:39:48.905827 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.15s
2026-02-16 04:39:48.905837 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.35s
2026-02-16 04:39:48.905846 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.00s
2026-02-16 04:39:48.905856 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.04s
2026-02-16 04:39:48.905867 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.90s
2026-02-16 04:39:48.905882 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.06s
2026-02-16 04:39:48.905898 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.02s
2026-02-16 04:39:48.905923 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.94s
2026-02-16 04:39:48.905937 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.84s
2026-02-16 04:39:53.576928 | orchestrator | 2026-02-16 04:39:53 | INFO  | Task 91d0468f-58ca-4d2a-a6c2-43e15a8af15b (grafana) was prepared for execution.
2026-02-16 04:39:53.577061 | orchestrator | 2026-02-16 04:39:53 | INFO  | It takes a moment until task 91d0468f-58ca-4d2a-a6c2-43e15a8af15b (grafana) has been started and output is visible here.
2026-02-16 04:40:03.447785 | orchestrator |
2026-02-16 04:40:03.447911 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-16 04:40:03.447928 | orchestrator |
2026-02-16 04:40:03.447941 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-16 04:40:03.447953 | orchestrator | Monday 16 February 2026 04:39:57 +0000 (0:00:00.291) 0:00:00.291 *******
2026-02-16 04:40:03.447965 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:40:03.447978 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:40:03.447989 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:40:03.448000 | orchestrator |
2026-02-16 04:40:03.448011 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-16 04:40:03.448022 | orchestrator | Monday 16 February 2026 04:39:58 +0000 (0:00:00.321) 0:00:00.613 *******
2026-02-16 04:40:03.448034 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-02-16 04:40:03.448045 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-02-16 04:40:03.448056 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-02-16 04:40:03.448067 | orchestrator |
2026-02-16 04:40:03.448078 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-02-16 04:40:03.448089 | orchestrator |
2026-02-16 04:40:03.448100 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-16 04:40:03.448185 | orchestrator | Monday 16 February 2026 04:39:58 +0000 (0:00:00.472) 0:00:01.085 *******
2026-02-16 04:40:03.448198 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:40:03.448211 | orchestrator |
2026-02-16 04:40:03.448222 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-02-16 04:40:03.448233 | orchestrator | Monday 16 February 2026 04:39:59 +0000 (0:00:00.570) 0:00:01.656 *******
2026-02-16 04:40:03.448248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-16 04:40:03.448264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-16 04:40:03.448277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-16 04:40:03.448312 | orchestrator |
2026-02-16 04:40:03.448325 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-02-16 04:40:03.448339 | orchestrator | Monday 16 February 2026 04:40:00 +0000 (0:00:00.912) 0:00:02.569 *******
2026-02-16 04:40:03.448351 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-02-16 04:40:03.448364 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-02-16 04:40:03.448377 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-16 04:40:03.448390 | orchestrator |
2026-02-16 04:40:03.448403 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-16 04:40:03.448415 | orchestrator | Monday 16 February 2026 04:40:00 +0000 (0:00:00.855) 0:00:03.424 *******
2026-02-16 04:40:03.448428 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 04:40:03.448441 | orchestrator |
2026-02-16 04:40:03.448454 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-02-16 04:40:03.448467 | orchestrator | Monday 16 February 2026 04:40:01 +0000 (0:00:00.572) 0:00:03.997 *******
2026-02-16 04:40:03.448506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-16 04:40:03.448521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-16 04:40:03.448535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-16 04:40:03.448548 | 
orchestrator | 2026-02-16 04:40:03.448562 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-16 04:40:03.448574 | orchestrator | Monday 16 February 2026 04:40:02 +0000 (0:00:01.330) 0:00:05.327 ******* 2026-02-16 04:40:03.448597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-16 04:40:03.448611 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:40:03.448624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-16 04:40:03.448637 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:40:03.448666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-16 04:40:10.302757 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:40:10.302863 | orchestrator | 2026-02-16 04:40:10.302876 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-16 04:40:10.302887 | orchestrator | Monday 16 February 2026 04:40:03 +0000 (0:00:00.576) 0:00:05.904 ******* 2026-02-16 04:40:10.302897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-16 04:40:10.302908 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:40:10.302917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-16 04:40:10.302945 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:40:10.302955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-16 04:40:10.302963 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:40:10.302971 | orchestrator | 2026-02-16 04:40:10.302980 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-16 04:40:10.302988 | orchestrator | Monday 16 February 2026 04:40:04 +0000 (0:00:00.619) 0:00:06.523 ******* 2026-02-16 04:40:10.302996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-16 04:40:10.303017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-16 04:40:10.303041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-16 04:40:10.303050 | orchestrator | 2026-02-16 04:40:10.303059 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-16 04:40:10.303067 | orchestrator | Monday 16 
February 2026 04:40:05 +0000 (0:00:01.250) 0:00:07.774 ******* 2026-02-16 04:40:10.303075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-16 04:40:10.303091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-16 04:40:10.303100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-16 04:40:10.303174 | orchestrator | 2026-02-16 04:40:10.303187 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-02-16 04:40:10.303195 | orchestrator | Monday 16 February 2026 04:40:06 +0000 (0:00:01.607) 0:00:09.381 ******* 2026-02-16 04:40:10.303203 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:40:10.303211 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:40:10.303219 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:40:10.303227 | orchestrator | 2026-02-16 04:40:10.303235 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-02-16 04:40:10.303243 | orchestrator | Monday 16 February 2026 04:40:07 +0000 (0:00:00.351) 0:00:09.732 ******* 2026-02-16 04:40:10.303251 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-16 04:40:10.303261 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-16 04:40:10.303268 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-16 04:40:10.303276 | orchestrator | 2026-02-16 04:40:10.303284 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-02-16 04:40:10.303292 | orchestrator | Monday 16 February 2026 04:40:08 +0000 (0:00:01.284) 0:00:11.016 ******* 2026-02-16 04:40:10.303301 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-16 04:40:10.303310 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-16 04:40:10.303325 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-16 04:40:10.303335 | orchestrator | 2026-02-16 04:40:10.303345 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-02-16 04:40:10.303361 | orchestrator | Monday 16 February 2026 04:40:10 +0000 (0:00:01.733) 0:00:12.750 ******* 2026-02-16 04:40:16.886003 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 04:40:16.886203 | orchestrator | 2026-02-16 04:40:16.886223 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-02-16 04:40:16.886237 | orchestrator | Monday 16 February 2026 04:40:11 +0000 (0:00:00.815) 0:00:13.565 ******* 2026-02-16 04:40:16.886249 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-02-16 04:40:16.886261 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-02-16 04:40:16.886297 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:40:16.886309 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:40:16.886320 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:40:16.886331 | orchestrator | 2026-02-16 04:40:16.886342 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-02-16 04:40:16.886353 | orchestrator | Monday 16 February 2026 04:40:11 +0000 (0:00:00.695) 0:00:14.261 ******* 2026-02-16 04:40:16.886364 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:40:16.886375 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:40:16.886390 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:40:16.886409 | orchestrator | 2026-02-16 04:40:16.886428 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-02-16 
04:40:16.886448 | orchestrator | Monday 16 February 2026 04:40:12 +0000 (0:00:00.346) 0:00:14.607 ******* 2026-02-16 04:40:16.886471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1080419, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7078304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:16.886496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1080419, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7078304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:16.886515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1080419, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7078304, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:16.886527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1080502, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7238667, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:16.886575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1080502, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7238667, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:16.886601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1080502, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7238667, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:16.886620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1080444, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7114131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:16.886638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1080444, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7114131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:16.886654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1080444, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1771209565.7114131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:16.886669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1080506, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.726851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:16.886707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1080506, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.726851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:16.886753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1080506, 'dev': 102, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.726851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:20.903680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1080473, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.714851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:20.903813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1080473, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.714851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:20.903842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 
'inode': 1080473, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.714851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:20.903865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1080496, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.721851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:20.903886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1080496, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.721851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:20.903952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1080496, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.721851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:20.903998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1080414, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7052937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:20.904019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1080414, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7052937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:20.904039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 
'inode': 1080414, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7052937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:20.904059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1080432, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7088509, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:20.904079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1080432, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7088509, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:20.904210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 34113, 'inode': 1080432, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7088509, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:20.904253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1080453, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7117026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:24.870860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1080453, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7117026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:24.870954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1080453, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7117026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:24.870968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1080480, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7171063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:24.870978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1080480, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7171063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:24.870987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1080480, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7171063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:24.871031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1080500, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7230084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:24.871058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1080500, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7230084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:24.871068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1080500, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7230084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:24.871077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1080436, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7098508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:24.871086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1080436, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7098508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:24.871095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1080436, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7098508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:24.871172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1080495, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7211778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:24.871191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1080495, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7211778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:29.061227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1080495, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7211778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:29.061321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1080477, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7159066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:29.061333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1080477, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7159066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:29.061342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1080477, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7159066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:29.061387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1080466, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.714794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:29.061397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1080466, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.714794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:29.061420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': 
{'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1080466, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.714794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:29.061429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1080462, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7128508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:29.061438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1080462, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7128508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:29.061446 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1080462, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7128508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:29.061461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1080487, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7211778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:29.061473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1080487, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7211778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:29.061488 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1080487, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7211778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:32.832689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1080457, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7126496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:32.832804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1080457, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7126496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:32.832821 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1080457, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7126496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:32.832858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1080499, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.721851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:32.832885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1080499, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.721851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-02-16 04:40:32.832898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1080499, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.721851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:32.832929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1080693, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7878206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:32.832941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1080693, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7878206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:32.832953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1080693, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7878206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:32.832973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1080543, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7398512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:32.832990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1080543, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7398512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:32.833003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1080543, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7398512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:32.833023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1080522, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.73018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:37.045927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1080522, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.73018, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:37.046054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1080522, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.73018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:37.046089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1080565, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7435806, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:37.046097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1080565, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1771209565.7435806, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:37.046186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1080565, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7435806, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:37.046195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1080515, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.727946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:37.046218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1080515, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.727946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:37.046225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1080515, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.727946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:37.046240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1080668, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7808518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:37.046248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1080668, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7808518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:37.046258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1080668, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7808518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:37.046265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1080567, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7790377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:37.046280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1080567, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7790377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:41.098904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1080567, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7790377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:41.099029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1080672, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7808518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 
04:40:41.099043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1080672, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7808518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:41.099059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1080672, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7808518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:41.099065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1080688, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.785852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:41.099071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1080688, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.785852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:41.099087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1080688, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.785852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:41.099097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1080663, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7803643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:41.099139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1080663, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7803643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:41.099149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1080663, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7803643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:41.099153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1080560, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7418513, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:41.099157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1080560, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7418513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:41.099167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1080560, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7418513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:44.714473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1080540, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1771209565.734427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:44.714569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1080540, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.734427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:44.714600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1080540, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.734427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:44.714612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1080558, 'dev': 102, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7415605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:44.714623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1080558, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7415605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:44.714633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1080558, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7415605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:44.714686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 187864, 'inode': 1080525, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7330284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:44.714699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1080525, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7330284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:44.714709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1080562, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7418513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:44.714726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1080525, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7330284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:44.714736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1080562, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7418513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:44.714746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1080678, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.785852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:44.714771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1080678, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.785852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:49.062571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1080562, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7418513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:49.062669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1080675, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.782852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:49.062696 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1080675, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.782852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:49.062705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1080678, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.785852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:49.062713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1080518, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7282403, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:49.062742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1080518, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7282403, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:49.062764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1080675, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.782852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:49.062772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1080519, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.729165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:49.062783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1080519, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.729165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:49.062791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1080518, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7282403, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:49.062798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1080661, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1771209565.7790377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:49.062812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1080661, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7790377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:40:49.062825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1080519, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.729165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:42:32.191906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
21898, 'inode': 1080673, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7820485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:42:32.192015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1080673, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7820485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:42:32.192026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1080661, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7790377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:42:32.192051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1080673, 'dev': 102, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771209565.7820485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-16 04:42:32.192058 | orchestrator | 2026-02-16 04:42:32.192066 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-02-16 04:42:32.192074 | orchestrator | Monday 16 February 2026 04:40:50 +0000 (0:00:38.497) 0:00:53.105 ******* 2026-02-16 04:42:32.192081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-16 04:42:32.192144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-16 04:42:32.192154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-16 04:42:32.192160 | orchestrator | 2026-02-16 04:42:32.192167 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-16 04:42:32.192174 | orchestrator | Monday 16 February 2026 04:40:51 +0000 (0:00:01.042) 0:00:54.147 ******* 2026-02-16 04:42:32.192180 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:42:32.192187 | orchestrator | 2026-02-16 04:42:32.192194 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-02-16 04:42:32.192200 | orchestrator | Monday 16 February 2026 04:40:54 +0000 (0:00:02.351) 0:00:56.499 ******* 2026-02-16 04:42:32.192210 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:42:32.192217 | orchestrator | 2026-02-16 04:42:32.192223 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-16 04:42:32.192229 | orchestrator | Monday 16 February 2026 04:40:56 +0000 (0:00:02.301) 0:00:58.800 ******* 2026-02-16 04:42:32.192236 | orchestrator | 2026-02-16 04:42:32.192242 | orchestrator | TASK [grafana : Flush handlers] 
************************************************
2026-02-16 04:42:32.192254 | orchestrator | Monday 16 February 2026 04:40:56 +0000 (0:00:00.074) 0:00:58.874 *******
2026-02-16 04:42:32.192260 | orchestrator |
2026-02-16 04:42:32.192267 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-02-16 04:42:32.192273 | orchestrator | Monday 16 February 2026 04:40:56 +0000 (0:00:00.071) 0:00:58.945 *******
2026-02-16 04:42:32.192279 | orchestrator |
2026-02-16 04:42:32.192286 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-02-16 04:42:32.192292 | orchestrator | Monday 16 February 2026 04:40:56 +0000 (0:00:00.072) 0:00:59.017 *******
2026-02-16 04:42:32.192298 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:42:32.192305 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:42:32.192311 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:42:32.192317 | orchestrator |
2026-02-16 04:42:32.192324 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-02-16 04:42:32.192330 | orchestrator | Monday 16 February 2026 04:41:03 +0000 (0:00:07.172) 0:01:06.190 *******
2026-02-16 04:42:32.192336 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:42:32.192342 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:42:32.192348 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-02-16 04:42:32.192355 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-02-16 04:42:32.192362 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-02-16 04:42:32.192368 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-02-16 04:42:32.192374 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:42:32.192382 | orchestrator |
2026-02-16 04:42:32.192388 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-02-16 04:42:32.192394 | orchestrator | Monday 16 February 2026 04:41:54 +0000 (0:00:50.800) 0:01:56.990 *******
2026-02-16 04:42:32.192400 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:42:32.192409 | orchestrator | changed: [testbed-node-1]
2026-02-16 04:42:32.192421 | orchestrator | changed: [testbed-node-2]
2026-02-16 04:42:32.192431 | orchestrator |
2026-02-16 04:42:32.192443 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-02-16 04:42:32.192453 | orchestrator | Monday 16 February 2026 04:42:27 +0000 (0:00:32.474) 0:02:29.465 *******
2026-02-16 04:42:32.192463 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:42:32.192473 | orchestrator |
2026-02-16 04:42:32.192484 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-02-16 04:42:32.192494 | orchestrator | Monday 16 February 2026 04:42:29 +0000 (0:00:02.256) 0:02:31.722 *******
2026-02-16 04:42:32.192505 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:42:32.192516 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:42:32.192528 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:42:32.192540 | orchestrator |
2026-02-16 04:42:32.192552 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-02-16 04:42:32.192563 | orchestrator | Monday 16 February 2026 04:42:29 +0000 (0:00:00.310) 0:02:32.032 *******
2026-02-16 04:42:32.192577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth':
False}}})
2026-02-16 04:42:32.192600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-02-16 04:42:32.855740 | orchestrator |
2026-02-16 04:42:32.855844 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-02-16 04:42:32.855887 | orchestrator | Monday 16 February 2026 04:42:32 +0000 (0:00:02.604) 0:02:34.636 *******
2026-02-16 04:42:32.855900 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:42:32.855912 | orchestrator |
2026-02-16 04:42:32.855924 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 04:42:32.855936 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-16 04:42:32.855949 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-16 04:42:32.855960 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-16 04:42:32.855971 | orchestrator |
2026-02-16 04:42:32.855982 | orchestrator |
2026-02-16 04:42:32.855993 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 04:42:32.856004 | orchestrator | Monday 16 February 2026 04:42:32 +0000 (0:00:00.294) 0:02:34.931 *******
2026-02-16 04:42:32.856015 | orchestrator | ===============================================================================
2026-02-16 04:42:32.856040 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.80s
2026-02-16 04:42:32.856052 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.50s
2026-02-16 04:42:32.856063 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.48s
2026-02-16 04:42:32.856074 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.17s
2026-02-16 04:42:32.856085 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.60s
2026-02-16 04:42:32.856173 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.35s
2026-02-16 04:42:32.856188 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.30s
2026-02-16 04:42:32.856199 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.26s
2026-02-16 04:42:32.856210 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.73s
2026-02-16 04:42:32.856221 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.61s
2026-02-16 04:42:32.856232 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.33s
2026-02-16 04:42:32.856243 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.28s
2026-02-16 04:42:32.856253 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.25s
2026-02-16 04:42:32.856264 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.04s
2026-02-16 04:42:32.856276 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.91s
2026-02-16 04:42:32.856289 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.86s
2026-02-16 04:42:32.856301 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.82s
2026-02-16 04:42:32.856313 | orchestrator | grafana : Find templated grafana dashboards
----------------------------- 0.70s
2026-02-16 04:42:32.856326 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.62s
2026-02-16 04:42:32.856338 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.58s
2026-02-16 04:42:33.170557 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh
2026-02-16 04:42:33.176605 | orchestrator | + set -e
2026-02-16 04:42:33.176816 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-16 04:42:33.176854 | orchestrator | ++ export INTERACTIVE=false
2026-02-16 04:42:33.176872 | orchestrator | ++ INTERACTIVE=false
2026-02-16 04:42:33.176886 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-16 04:42:33.176896 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-16 04:42:33.176999 | orchestrator | + source /opt/manager-vars.sh
2026-02-16 04:42:33.177720 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-16 04:42:33.177755 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-16 04:42:33.177792 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-16 04:42:33.177802 | orchestrator | ++ CEPH_VERSION=reef
2026-02-16 04:42:33.177812 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-16 04:42:33.177823 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-16 04:42:33.177833 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-16 04:42:33.177843 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-16 04:42:33.177857 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-16 04:42:33.177873 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-16 04:42:33.177889 | orchestrator | ++ export ARA=false
2026-02-16 04:42:33.177904 | orchestrator | ++ ARA=false
2026-02-16 04:42:33.177918 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-16 04:42:33.177935 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-16 04:42:33.177949 | orchestrator | ++ export TEMPEST=false
2026-02-16 04:42:33.177964 | orchestrator | ++ TEMPEST=false
2026-02-16 04:42:33.177976 | orchestrator | ++ export IS_ZUUL=true
2026-02-16 04:42:33.177986 | orchestrator | ++ IS_ZUUL=true
2026-02-16 04:42:33.177995 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120
2026-02-16 04:42:33.178004 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120
2026-02-16 04:42:33.178055 | orchestrator | ++ export EXTERNAL_API=false
2026-02-16 04:42:33.178065 | orchestrator | ++ EXTERNAL_API=false
2026-02-16 04:42:33.178074 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-16 04:42:33.178082 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-16 04:42:33.178091 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-16 04:42:33.178126 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-16 04:42:33.178136 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-16 04:42:33.178144 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-16 04:42:33.178591 | orchestrator | ++ semver 9.5.0 8.0.0
2026-02-16 04:42:33.239219 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-16 04:42:33.239299 | orchestrator | + osism apply clusterapi
2026-02-16 04:42:35.350006 | orchestrator | 2026-02-16 04:42:35 | INFO  | Task db7e743c-0681-42d3-b050-a6e97aa27617 (clusterapi) was prepared for execution.
2026-02-16 04:42:35.350416 | orchestrator | 2026-02-16 04:42:35 | INFO  | It takes a moment until task db7e743c-0681-42d3-b050-a6e97aa27617 (clusterapi) has been started and output is visible here.
2026-02-16 04:43:36.466866 | orchestrator |
2026-02-16 04:43:36.466955 | orchestrator | PLAY [Apply cert_manager role] *************************************************
2026-02-16 04:43:36.466968 | orchestrator |
2026-02-16 04:43:36.466977 | orchestrator | TASK [Include cert_manager role] ***********************************************
2026-02-16 04:43:36.466984 | orchestrator | Monday 16 February 2026 04:42:39 +0000 (0:00:00.186) 0:00:00.186 *******
2026-02-16 04:43:36.466992 | orchestrator | included: cert_manager for testbed-manager
2026-02-16 04:43:36.466999 | orchestrator |
2026-02-16 04:43:36.467007 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] *********************************
2026-02-16 04:43:36.467015 | orchestrator | Monday 16 February 2026 04:42:39 +0000 (0:00:00.230) 0:00:00.417 *******
2026-02-16 04:43:36.467023 | orchestrator | changed: [testbed-manager]
2026-02-16 04:43:36.467031 | orchestrator |
2026-02-16 04:43:36.467039 | orchestrator | TASK [cert_manager : Deploy cert-manager] **************************************
2026-02-16 04:43:36.467047 | orchestrator | Monday 16 February 2026 04:42:45 +0000 (0:00:05.402) 0:00:05.820 *******
2026-02-16 04:43:36.467054 | orchestrator | changed: [testbed-manager]
2026-02-16 04:43:36.467061 | orchestrator |
2026-02-16 04:43:36.467069 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] ***********************
2026-02-16 04:43:36.467076 | orchestrator |
2026-02-16 04:43:36.467081 | orchestrator | TASK [Get capi-system namespace phase] *****************************************
2026-02-16 04:43:36.467086 | orchestrator | Monday 16 February 2026 04:43:14 +0000 (0:00:29.435) 0:00:35.256 *******
2026-02-16 04:43:36.467137 | orchestrator | ok: [testbed-manager]
2026-02-16 04:43:36.467144 | orchestrator |
2026-02-16 04:43:36.467150 | orchestrator | TASK [Set capi-system-phase fact] **********************************************
2026-02-16 04:43:36.467168 | orchestrator | Monday 16 February 2026 04:43:15 +0000 (0:00:01.199) 0:00:36.455 *******
2026-02-16 04:43:36.467173 | orchestrator | ok: [testbed-manager]
2026-02-16 04:43:36.467178 | orchestrator |
2026-02-16 04:43:36.467183 | orchestrator | TASK [Initialize the CAPI management cluster] **********************************
2026-02-16 04:43:36.467190 | orchestrator | Monday 16 February 2026 04:43:16 +0000 (0:00:00.171) 0:00:36.626 *******
2026-02-16 04:43:36.467217 | orchestrator | ok: [testbed-manager]
2026-02-16 04:43:36.467225 | orchestrator |
2026-02-16 04:43:36.467232 | orchestrator | TASK [Upgrade the CAPI management cluster] *************************************
2026-02-16 04:43:36.467239 | orchestrator | Monday 16 February 2026 04:43:33 +0000 (0:00:17.849) 0:00:54.476 *******
2026-02-16 04:43:36.467246 | orchestrator | skipping: [testbed-manager]
2026-02-16 04:43:36.467253 | orchestrator |
2026-02-16 04:43:36.467261 | orchestrator | TASK [Install openstack-resource-controller] ***********************************
2026-02-16 04:43:36.467268 | orchestrator | Monday 16 February 2026 04:43:34 +0000 (0:00:00.137) 0:00:54.613 *******
2026-02-16 04:43:36.467276 | orchestrator | changed: [testbed-manager]
2026-02-16 04:43:36.467284 | orchestrator |
2026-02-16 04:43:36.467291 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 04:43:36.467300 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 04:43:36.467310 | orchestrator |
2026-02-16 04:43:36.467315 | orchestrator |
2026-02-16 04:43:36.467319 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 04:43:36.467324 | orchestrator | Monday 16 February 2026 04:43:36 +0000 (0:00:02.054) 0:00:56.667 *******
2026-02-16 04:43:36.467328 | orchestrator | ===============================================================================
2026-02-16 04:43:36.467333 | orchestrator | cert_manager : Deploy cert-manager ------------------------------------- 29.44s
2026-02-16 04:43:36.467338 | orchestrator | Initialize the CAPI management cluster --------------------------------- 17.85s
2026-02-16 04:43:36.467342 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.40s
2026-02-16 04:43:36.467347 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.05s
2026-02-16 04:43:36.467351 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.20s
2026-02-16 04:43:36.467356 | orchestrator | Include cert_manager role ----------------------------------------------- 0.23s
2026-02-16 04:43:36.467360 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.17s
2026-02-16 04:43:36.467365 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.14s
2026-02-16 04:43:36.772658 | orchestrator | + osism apply magnum
2026-02-16 04:43:38.834579 | orchestrator | 2026-02-16 04:43:38 | INFO  | Task b08912bd-3e09-4759-91b3-626f35997d97 (magnum) was prepared for execution.
2026-02-16 04:43:38.834654 | orchestrator | 2026-02-16 04:43:38 | INFO  | It takes a moment until task b08912bd-3e09-4759-91b3-626f35997d97 (magnum) has been started and output is visible here.
2026-02-16 04:44:22.322611 | orchestrator | 2026-02-16 04:44:22.322738 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 04:44:22.322757 | orchestrator | 2026-02-16 04:44:22.322771 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 04:44:22.322786 | orchestrator | Monday 16 February 2026 04:43:43 +0000 (0:00:00.280) 0:00:00.280 ******* 2026-02-16 04:44:22.322799 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:44:22.322814 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:44:22.322827 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:44:22.322841 | orchestrator | 2026-02-16 04:44:22.322854 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 04:44:22.322867 | orchestrator | Monday 16 February 2026 04:43:43 +0000 (0:00:00.342) 0:00:00.623 ******* 2026-02-16 04:44:22.322879 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-02-16 04:44:22.322892 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-02-16 04:44:22.322905 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-02-16 04:44:22.322919 | orchestrator | 2026-02-16 04:44:22.322932 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-02-16 04:44:22.322945 | orchestrator | 2026-02-16 04:44:22.322959 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-16 04:44:22.322998 | orchestrator | Monday 16 February 2026 04:43:43 +0000 (0:00:00.476) 0:00:01.100 ******* 2026-02-16 04:44:22.323012 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 04:44:22.323025 | orchestrator | 2026-02-16 04:44:22.323039 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-02-16 
04:44:22.323054 | orchestrator | Monday 16 February 2026 04:43:44 +0000 (0:00:00.621) 0:00:01.722 ******* 2026-02-16 04:44:22.323068 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-02-16 04:44:22.323140 | orchestrator | 2026-02-16 04:44:22.323159 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-02-16 04:44:22.323175 | orchestrator | Monday 16 February 2026 04:43:48 +0000 (0:00:03.636) 0:00:05.358 ******* 2026-02-16 04:44:22.323191 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-02-16 04:44:22.323207 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-02-16 04:44:22.323223 | orchestrator | 2026-02-16 04:44:22.323240 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-02-16 04:44:22.323256 | orchestrator | Monday 16 February 2026 04:43:54 +0000 (0:00:06.564) 0:00:11.922 ******* 2026-02-16 04:44:22.323273 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-16 04:44:22.323288 | orchestrator | 2026-02-16 04:44:22.323303 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-02-16 04:44:22.323335 | orchestrator | Monday 16 February 2026 04:43:58 +0000 (0:00:03.415) 0:00:15.337 ******* 2026-02-16 04:44:22.323352 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-16 04:44:22.323370 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-02-16 04:44:22.323386 | orchestrator | 2026-02-16 04:44:22.323401 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-02-16 04:44:22.323415 | orchestrator | Monday 16 February 2026 04:44:02 +0000 (0:00:04.069) 0:00:19.406 ******* 2026-02-16 04:44:22.323430 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-16 04:44:22.323445 | orchestrator | 2026-02-16 04:44:22.323459 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-02-16 04:44:22.323472 | orchestrator | Monday 16 February 2026 04:44:05 +0000 (0:00:03.183) 0:00:22.589 ******* 2026-02-16 04:44:22.323486 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-02-16 04:44:22.323498 | orchestrator | 2026-02-16 04:44:22.323511 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-02-16 04:44:22.323523 | orchestrator | Monday 16 February 2026 04:44:09 +0000 (0:00:03.822) 0:00:26.412 ******* 2026-02-16 04:44:22.323536 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:44:22.323548 | orchestrator | 2026-02-16 04:44:22.323560 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-02-16 04:44:22.323572 | orchestrator | Monday 16 February 2026 04:44:12 +0000 (0:00:03.481) 0:00:29.894 ******* 2026-02-16 04:44:22.323585 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:44:22.323598 | orchestrator | 2026-02-16 04:44:22.323611 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-02-16 04:44:22.323624 | orchestrator | Monday 16 February 2026 04:44:16 +0000 (0:00:04.081) 0:00:33.976 ******* 2026-02-16 04:44:22.323636 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:44:22.323648 | orchestrator | 2026-02-16 04:44:22.323660 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-02-16 04:44:22.323672 | orchestrator | Monday 16 February 2026 04:44:20 +0000 (0:00:03.807) 0:00:37.783 ******* 2026-02-16 04:44:22.323715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:22.323745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:22.323766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:22.323780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:44:22.323794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:44:22.323824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:44:29.842369 | orchestrator | 2026-02-16 04:44:29.842469 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-16 04:44:29.842482 | orchestrator | Monday 16 February 2026 04:44:22 +0000 (0:00:01.646) 0:00:39.430 ******* 2026-02-16 04:44:29.842491 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:44:29.842501 | orchestrator | 2026-02-16 04:44:29.842509 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-16 04:44:29.842517 | orchestrator | Monday 16 February 2026 04:44:22 +0000 (0:00:00.144) 0:00:39.574 ******* 2026-02-16 04:44:29.842525 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:44:29.842533 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:44:29.842541 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:44:29.842549 | orchestrator | 2026-02-16 04:44:29.842557 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-16 04:44:29.842565 | orchestrator | Monday 16 February 2026 04:44:22 +0000 (0:00:00.291) 0:00:39.865 ******* 2026-02-16 04:44:29.842572 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 04:44:29.842580 | orchestrator | 2026-02-16 04:44:29.842588 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-02-16 04:44:29.842596 | orchestrator | Monday 16 February 2026 04:44:23 +0000 (0:00:00.859) 0:00:40.725 ******* 2026-02-16 04:44:29.842605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:29.842631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:29.842641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:29.842683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:44:29.842695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:44:29.842703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:44:29.842711 | orchestrator | 2026-02-16 04:44:29.842724 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-16 04:44:29.842732 
| orchestrator | Monday 16 February 2026 04:44:25 +0000 (0:00:02.346) 0:00:43.071 ******* 2026-02-16 04:44:29.842740 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:44:29.842749 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:44:29.842757 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:44:29.842765 | orchestrator | 2026-02-16 04:44:29.842773 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-16 04:44:29.842781 | orchestrator | Monday 16 February 2026 04:44:26 +0000 (0:00:00.539) 0:00:43.611 ******* 2026-02-16 04:44:29.842789 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 04:44:29.842797 | orchestrator | 2026-02-16 04:44:29.842805 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-16 04:44:29.842819 | orchestrator | Monday 16 February 2026 04:44:27 +0000 (0:00:00.595) 0:00:44.207 ******* 2026-02-16 04:44:29.842828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:29.842844 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:30.777717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:30.777862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:44:30.777882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:44:30.777917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:44:30.777930 | orchestrator | 2026-02-16 04:44:30.777943 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-16 04:44:30.777956 | orchestrator | Monday 16 February 2026 04:44:29 +0000 (0:00:02.757) 0:00:46.964 ******* 2026-02-16 04:44:30.777988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-16 04:44:30.778001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 04:44:30.778012 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:44:30.778152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-16 04:44:30.778189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 04:44:30.778209 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:44:30.778228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-16 04:44:30.778254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 04:44:34.412764 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:44:34.412913 | orchestrator | 2026-02-16 
04:44:34.412934 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-16 04:44:34.412947 | orchestrator | Monday 16 February 2026 04:44:30 +0000 (0:00:00.925) 0:00:47.890 ******* 2026-02-16 04:44:34.412961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-16 04:44:34.412988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 04:44:34.413029 | 
orchestrator | skipping: [testbed-node-0] 2026-02-16 04:44:34.413042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-16 04:44:34.413054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 04:44:34.413066 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:44:34.413128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-16 04:44:34.413191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 04:44:34.413250 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:44:34.413263 | orchestrator | 2026-02-16 04:44:34.413276 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-16 04:44:34.413294 | orchestrator | Monday 16 February 2026 04:44:31 +0000 (0:00:01.000) 0:00:48.890 ******* 2026-02-16 04:44:34.413308 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:34.413322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:34.413346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:40.584553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:44:40.584680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:44:40.584693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:44:40.584702 | orchestrator | 2026-02-16 04:44:40.584711 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-16 04:44:40.584719 | orchestrator | Monday 16 February 2026 04:44:34 +0000 (0:00:02.643) 0:00:51.534 ******* 2026-02-16 04:44:40.584727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:40.584748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:40.584756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:40.584773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:44:40.584781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:44:40.584788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:44:40.584795 | orchestrator | 2026-02-16 04:44:40.584802 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-16 04:44:40.584809 | orchestrator | Monday 16 February 2026 04:44:39 +0000 (0:00:05.491) 0:00:57.025 ******* 2026-02-16 04:44:40.584822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-16 04:44:42.518781 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 04:44:42.518909 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:44:42.518945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-16 04:44:42.518960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 04:44:42.518972 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:44:42.518983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-16 04:44:42.519013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 04:44:42.519117 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:44:42.519133 | orchestrator | 2026-02-16 04:44:42.519146 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-16 04:44:42.519158 | orchestrator | Monday 16 February 2026 04:44:40 +0000 (0:00:00.682) 0:00:57.708 ******* 2026-02-16 04:44:42.519177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:42.519189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:42.519201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-16 04:44:42.519212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:44:42.519247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-16 04:45:34.521191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-02-16 04:45:34.521297 | orchestrator | 2026-02-16 04:45:34.521308 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-16 04:45:34.521317 | orchestrator | Monday 16 February 2026 04:44:42 +0000 (0:00:01.929) 0:00:59.638 ******* 2026-02-16 04:45:34.521324 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:45:34.521332 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:45:34.521338 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:45:34.521345 | orchestrator | 2026-02-16 04:45:34.521352 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-16 04:45:34.521358 | orchestrator | Monday 16 February 2026 04:44:42 +0000 (0:00:00.483) 0:01:00.121 ******* 2026-02-16 04:45:34.521365 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:45:34.521372 | orchestrator | 2026-02-16 04:45:34.521407 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-16 04:45:34.521414 | orchestrator | Monday 16 February 2026 04:44:45 +0000 (0:00:02.191) 0:01:02.312 ******* 2026-02-16 04:45:34.521421 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:45:34.521428 | orchestrator | 2026-02-16 04:45:34.521434 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-16 04:45:34.521441 | orchestrator | Monday 16 February 2026 04:44:47 +0000 (0:00:02.315) 0:01:04.628 ******* 2026-02-16 04:45:34.521448 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:45:34.521454 | orchestrator | 2026-02-16 04:45:34.521461 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-16 04:45:34.521467 | orchestrator | Monday 16 February 2026 04:45:04 +0000 (0:00:16.742) 0:01:21.371 ******* 2026-02-16 04:45:34.521474 | orchestrator | 2026-02-16 04:45:34.521480 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-02-16 04:45:34.521487 | orchestrator | Monday 16 February 2026 04:45:04 +0000 (0:00:00.101) 0:01:21.472 ******* 2026-02-16 04:45:34.521493 | orchestrator | 2026-02-16 04:45:34.521500 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-16 04:45:34.521506 | orchestrator | Monday 16 February 2026 04:45:04 +0000 (0:00:00.072) 0:01:21.545 ******* 2026-02-16 04:45:34.521512 | orchestrator | 2026-02-16 04:45:34.521523 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-16 04:45:34.521532 | orchestrator | Monday 16 February 2026 04:45:04 +0000 (0:00:00.072) 0:01:21.618 ******* 2026-02-16 04:45:34.521542 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:45:34.521578 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:45:34.521590 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:45:34.521600 | orchestrator | 2026-02-16 04:45:34.521611 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-16 04:45:34.521622 | orchestrator | Monday 16 February 2026 04:45:23 +0000 (0:00:19.028) 0:01:40.646 ******* 2026-02-16 04:45:34.521632 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:45:34.521641 | orchestrator | changed: [testbed-node-2] 2026-02-16 04:45:34.521652 | orchestrator | changed: [testbed-node-1] 2026-02-16 04:45:34.521662 | orchestrator | 2026-02-16 04:45:34.521672 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 04:45:34.521684 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-16 04:45:34.521696 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-16 04:45:34.521707 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-16 04:45:34.521718 | orchestrator | 2026-02-16 04:45:34.521728 | orchestrator | 2026-02-16 04:45:34.521738 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 04:45:34.521748 | orchestrator | Monday 16 February 2026 04:45:34 +0000 (0:00:10.665) 0:01:51.311 ******* 2026-02-16 04:45:34.521759 | orchestrator | =============================================================================== 2026-02-16 04:45:34.521771 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 19.03s 2026-02-16 04:45:34.521782 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.74s 2026-02-16 04:45:34.521793 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.67s 2026-02-16 04:45:34.521803 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.56s 2026-02-16 04:45:34.521810 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.49s 2026-02-16 04:45:34.521817 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.08s 2026-02-16 04:45:34.521824 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.07s 2026-02-16 04:45:34.521846 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.82s 2026-02-16 04:45:34.521857 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.81s 2026-02-16 04:45:34.521867 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.64s 2026-02-16 04:45:34.521877 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.48s 2026-02-16 04:45:34.521886 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.42s 2026-02-16 04:45:34.521895 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.18s 2026-02-16 04:45:34.521904 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.76s 2026-02-16 04:45:34.521914 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.64s 2026-02-16 04:45:34.521925 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.35s 2026-02-16 04:45:34.521936 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.32s 2026-02-16 04:45:34.521947 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.19s 2026-02-16 04:45:34.521958 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.93s 2026-02-16 04:45:34.521968 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.65s 2026-02-16 04:45:35.257703 | orchestrator | ok: Runtime: 1:43:36.745274 2026-02-16 04:45:35.520599 | 2026-02-16 04:45:35.520741 | TASK [Deploy in a nutshell] 2026-02-16 04:45:36.057368 | orchestrator | skipping: Conditional result was False 2026-02-16 04:45:36.080351 | 2026-02-16 04:45:36.080508 | TASK [Bootstrap services] 2026-02-16 04:45:36.805937 | orchestrator | 2026-02-16 04:45:36.806144 | orchestrator | # BOOTSTRAP 2026-02-16 04:45:36.806162 | orchestrator | 2026-02-16 04:45:36.806172 | orchestrator | + set -e 2026-02-16 04:45:36.806182 | orchestrator | + echo 2026-02-16 04:45:36.806192 | orchestrator | + echo '# BOOTSTRAP' 2026-02-16 04:45:36.806204 | orchestrator | + echo 2026-02-16 04:45:36.806238 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-02-16 04:45:36.814781 | orchestrator | + set -e 2026-02-16 04:45:36.815450 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-02-16 04:45:39.020749 | orchestrator | 2026-02-16 04:45:39 | INFO  | It takes a 
moment until task c4f7bc3c-af7e-4ac2-b7c0-b01af4f088aa (flavor-manager) has been started and output is visible here. 2026-02-16 04:45:47.461608 | orchestrator | 2026-02-16 04:45:42 | INFO  | Flavor SCS-1L-1 created 2026-02-16 04:45:47.461765 | orchestrator | 2026-02-16 04:45:42 | INFO  | Flavor SCS-1L-1-5 created 2026-02-16 04:45:47.461815 | orchestrator | 2026-02-16 04:45:43 | INFO  | Flavor SCS-1V-2 created 2026-02-16 04:45:47.461833 | orchestrator | 2026-02-16 04:45:43 | INFO  | Flavor SCS-1V-2-5 created 2026-02-16 04:45:47.461851 | orchestrator | 2026-02-16 04:45:43 | INFO  | Flavor SCS-1V-4 created 2026-02-16 04:45:47.461868 | orchestrator | 2026-02-16 04:45:43 | INFO  | Flavor SCS-1V-4-10 created 2026-02-16 04:45:47.461883 | orchestrator | 2026-02-16 04:45:43 | INFO  | Flavor SCS-1V-8 created 2026-02-16 04:45:47.461900 | orchestrator | 2026-02-16 04:45:43 | INFO  | Flavor SCS-1V-8-20 created 2026-02-16 04:45:47.461934 | orchestrator | 2026-02-16 04:45:44 | INFO  | Flavor SCS-2V-4 created 2026-02-16 04:45:47.461951 | orchestrator | 2026-02-16 04:45:44 | INFO  | Flavor SCS-2V-4-10 created 2026-02-16 04:45:47.461968 | orchestrator | 2026-02-16 04:45:44 | INFO  | Flavor SCS-2V-8 created 2026-02-16 04:45:47.461984 | orchestrator | 2026-02-16 04:45:44 | INFO  | Flavor SCS-2V-8-20 created 2026-02-16 04:45:47.462001 | orchestrator | 2026-02-16 04:45:44 | INFO  | Flavor SCS-2V-16 created 2026-02-16 04:45:47.462102 | orchestrator | 2026-02-16 04:45:44 | INFO  | Flavor SCS-2V-16-50 created 2026-02-16 04:45:47.462123 | orchestrator | 2026-02-16 04:45:44 | INFO  | Flavor SCS-4V-8 created 2026-02-16 04:45:47.462141 | orchestrator | 2026-02-16 04:45:45 | INFO  | Flavor SCS-4V-8-20 created 2026-02-16 04:45:47.462159 | orchestrator | 2026-02-16 04:45:45 | INFO  | Flavor SCS-4V-16 created 2026-02-16 04:45:47.462176 | orchestrator | 2026-02-16 04:45:45 | INFO  | Flavor SCS-4V-16-50 created 2026-02-16 04:45:47.462192 | orchestrator | 2026-02-16 04:45:45 | INFO  | Flavor 
SCS-4V-32 created 2026-02-16 04:45:47.462207 | orchestrator | 2026-02-16 04:45:45 | INFO  | Flavor SCS-4V-32-100 created 2026-02-16 04:45:47.462224 | orchestrator | 2026-02-16 04:45:45 | INFO  | Flavor SCS-8V-16 created 2026-02-16 04:45:47.462241 | orchestrator | 2026-02-16 04:45:46 | INFO  | Flavor SCS-8V-16-50 created 2026-02-16 04:45:47.462259 | orchestrator | 2026-02-16 04:45:46 | INFO  | Flavor SCS-8V-32 created 2026-02-16 04:45:47.462277 | orchestrator | 2026-02-16 04:45:46 | INFO  | Flavor SCS-8V-32-100 created 2026-02-16 04:45:47.462294 | orchestrator | 2026-02-16 04:45:46 | INFO  | Flavor SCS-16V-32 created 2026-02-16 04:45:47.462313 | orchestrator | 2026-02-16 04:45:46 | INFO  | Flavor SCS-16V-32-100 created 2026-02-16 04:45:47.462329 | orchestrator | 2026-02-16 04:45:46 | INFO  | Flavor SCS-2V-4-20s created 2026-02-16 04:45:47.462347 | orchestrator | 2026-02-16 04:45:47 | INFO  | Flavor SCS-4V-8-50s created 2026-02-16 04:45:47.462365 | orchestrator | 2026-02-16 04:45:47 | INFO  | Flavor SCS-8V-32-100s created 2026-02-16 04:45:49.803260 | orchestrator | 2026-02-16 04:45:49 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-02-16 04:46:00.001269 | orchestrator | 2026-02-16 04:45:59 | INFO  | Task c76b7168-b3d6-42f7-bdee-50690b7e5a5f (bootstrap-basic) was prepared for execution. 2026-02-16 04:46:00.001409 | orchestrator | 2026-02-16 04:45:59 | INFO  | It takes a moment until task c76b7168-b3d6-42f7-bdee-50690b7e5a5f (bootstrap-basic) has been started and output is visible here. 
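The flavor-manager run above creates the standard SCS flavor set; the resources are encoded in the name itself (per the SCS flavor naming convention, `SCS-<vCPUs><class>-<RAM GiB>[-<disk GB>[s]]`, so `SCS-2V-4-10` is 2 vCPUs, 4 GiB RAM, 10 GB root disk). A minimal sketch of decoding such a name — `parse_scs_flavor` is a hypothetical helper for illustration, not part of the testbed scripts:

```shell
#!/bin/sh
# Decode an SCS flavor name like "SCS-2V-4-10" into "<vCPUs> <RAM GiB> <disk GB>".
# Assumes the SCS naming scheme SCS-<cpu><class>-<ram>[-<disk>[s]].
parse_scs_flavor() {
    name=$1
    rest=${name#SCS-}                                  # drop the "SCS-" prefix
    cpu_part=$(printf '%s' "$rest" | cut -d- -f1)      # e.g. "2V" or "1L"
    ram=$(printf '%s' "$rest" | cut -d- -f2)           # RAM in GiB
    disk=$(printf '%s' "$rest" | cut -d- -f3)          # disk in GB; empty if diskless
    cpus=$(printf '%s' "$cpu_part" | tr -d 'A-Za-z')   # strip the class letter (V, L, ...)
    disk=$(printf '%s' "$disk" | tr -d 'A-Za-z')       # strip an "s" (SSD) suffix if present
    printf '%s %s %s\n' "$cpus" "$ram" "${disk:-0}"
}

parse_scs_flavor SCS-2V-4-10    # → 2 4 10
```

Diskless flavors such as `SCS-16V-32` simply omit the third field, and the `s`-suffixed ones in the log (`SCS-2V-4-20s`) mark SSD-backed root disks.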
2026-02-16 04:46:42.599833 | orchestrator | 2026-02-16 04:46:42.599918 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-02-16 04:46:42.599928 | orchestrator | 2026-02-16 04:46:42.599935 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-16 04:46:42.599942 | orchestrator | Monday 16 February 2026 04:46:04 +0000 (0:00:00.070) 0:00:00.070 ******* 2026-02-16 04:46:42.599948 | orchestrator | ok: [localhost] 2026-02-16 04:46:42.599955 | orchestrator | 2026-02-16 04:46:42.599961 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-02-16 04:46:42.599967 | orchestrator | Monday 16 February 2026 04:46:06 +0000 (0:00:01.830) 0:00:01.900 ******* 2026-02-16 04:46:42.599973 | orchestrator | ok: [localhost] 2026-02-16 04:46:42.599979 | orchestrator | 2026-02-16 04:46:42.599985 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-02-16 04:46:42.599991 | orchestrator | Monday 16 February 2026 04:46:12 +0000 (0:00:06.655) 0:00:08.556 ******* 2026-02-16 04:46:42.599997 | orchestrator | changed: [localhost] 2026-02-16 04:46:42.600003 | orchestrator | 2026-02-16 04:46:42.600009 | orchestrator | TASK [Create public network] *************************************************** 2026-02-16 04:46:42.600015 | orchestrator | Monday 16 February 2026 04:46:19 +0000 (0:00:06.472) 0:00:15.029 ******* 2026-02-16 04:46:42.600021 | orchestrator | changed: [localhost] 2026-02-16 04:46:42.600027 | orchestrator | 2026-02-16 04:46:42.600033 | orchestrator | TASK [Set public network to default] ******************************************* 2026-02-16 04:46:42.600149 | orchestrator | Monday 16 February 2026 04:46:25 +0000 (0:00:05.618) 0:00:20.647 ******* 2026-02-16 04:46:42.600160 | orchestrator | changed: [localhost] 2026-02-16 04:46:42.600166 | orchestrator | 2026-02-16 04:46:42.600172 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-02-16 04:46:42.600178 | orchestrator | Monday 16 February 2026 04:46:31 +0000 (0:00:06.044) 0:00:26.692 ******* 2026-02-16 04:46:42.600184 | orchestrator | changed: [localhost] 2026-02-16 04:46:42.600190 | orchestrator | 2026-02-16 04:46:42.600196 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-02-16 04:46:42.600202 | orchestrator | Monday 16 February 2026 04:46:35 +0000 (0:00:04.300) 0:00:30.992 ******* 2026-02-16 04:46:42.600207 | orchestrator | changed: [localhost] 2026-02-16 04:46:42.600213 | orchestrator | 2026-02-16 04:46:42.600219 | orchestrator | TASK [Create manager role] ***************************************************** 2026-02-16 04:46:42.600233 | orchestrator | Monday 16 February 2026 04:46:39 +0000 (0:00:03.806) 0:00:34.799 ******* 2026-02-16 04:46:42.600239 | orchestrator | ok: [localhost] 2026-02-16 04:46:42.600245 | orchestrator | 2026-02-16 04:46:42.600250 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 04:46:42.600256 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 04:46:42.600263 | orchestrator | 2026-02-16 04:46:42.600269 | orchestrator | 2026-02-16 04:46:42.600275 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 04:46:42.600281 | orchestrator | Monday 16 February 2026 04:46:42 +0000 (0:00:03.244) 0:00:38.044 ******* 2026-02-16 04:46:42.600287 | orchestrator | =============================================================================== 2026-02-16 04:46:42.600293 | orchestrator | Get volume type LUKS ---------------------------------------------------- 6.66s 2026-02-16 04:46:42.600299 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.47s 2026-02-16 04:46:42.600305 | 
orchestrator | Set public network to default ------------------------------------------- 6.04s 2026-02-16 04:46:42.600311 | orchestrator | Create public network --------------------------------------------------- 5.62s 2026-02-16 04:46:42.600335 | orchestrator | Create public subnet ---------------------------------------------------- 4.30s 2026-02-16 04:46:42.600341 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.81s 2026-02-16 04:46:42.600347 | orchestrator | Create manager role ----------------------------------------------------- 3.25s 2026-02-16 04:46:42.600353 | orchestrator | Gathering Facts --------------------------------------------------------- 1.83s 2026-02-16 04:46:44.872545 | orchestrator | 2026-02-16 04:46:44 | INFO  | It takes a moment until task 9475cd6c-c480-49c4-a01c-b8c1e57e1658 (image-manager) has been started and output is visible here. 2026-02-16 04:47:28.839043 | orchestrator | 2026-02-16 04:46:47 | INFO  | Processing image 'Cirros 0.6.2' 2026-02-16 04:47:28.839154 | orchestrator | 2026-02-16 04:46:47 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-02-16 04:47:28.839171 | orchestrator | 2026-02-16 04:46:47 | INFO  | Importing image Cirros 0.6.2 2026-02-16 04:47:28.839182 | orchestrator | 2026-02-16 04:46:47 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-16 04:47:28.839193 | orchestrator | 2026-02-16 04:46:50 | INFO  | Waiting for image to leave queued state... 2026-02-16 04:47:28.839204 | orchestrator | 2026-02-16 04:46:52 | INFO  | Waiting for import to complete... 
2026-02-16 04:47:28.839213 | orchestrator | 2026-02-16 04:47:02 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-02-16 04:47:28.839224 | orchestrator | 2026-02-16 04:47:02 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-02-16 04:47:28.839234 | orchestrator | 2026-02-16 04:47:02 | INFO  | Setting internal_version = 0.6.2 2026-02-16 04:47:28.839244 | orchestrator | 2026-02-16 04:47:02 | INFO  | Setting image_original_user = cirros 2026-02-16 04:47:28.839254 | orchestrator | 2026-02-16 04:47:02 | INFO  | Adding tag os:cirros 2026-02-16 04:47:28.839264 | orchestrator | 2026-02-16 04:47:03 | INFO  | Setting property architecture: x86_64 2026-02-16 04:47:28.839274 | orchestrator | 2026-02-16 04:47:03 | INFO  | Setting property hw_disk_bus: scsi 2026-02-16 04:47:28.839284 | orchestrator | 2026-02-16 04:47:03 | INFO  | Setting property hw_rng_model: virtio 2026-02-16 04:47:28.839293 | orchestrator | 2026-02-16 04:47:04 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-16 04:47:28.839303 | orchestrator | 2026-02-16 04:47:04 | INFO  | Setting property hw_watchdog_action: reset 2026-02-16 04:47:28.839313 | orchestrator | 2026-02-16 04:47:04 | INFO  | Setting property hypervisor_type: qemu 2026-02-16 04:47:28.839323 | orchestrator | 2026-02-16 04:47:04 | INFO  | Setting property os_distro: cirros 2026-02-16 04:47:28.839333 | orchestrator | 2026-02-16 04:47:05 | INFO  | Setting property os_purpose: minimal 2026-02-16 04:47:28.839342 | orchestrator | 2026-02-16 04:47:05 | INFO  | Setting property replace_frequency: never 2026-02-16 04:47:28.839352 | orchestrator | 2026-02-16 04:47:05 | INFO  | Setting property uuid_validity: none 2026-02-16 04:47:28.839362 | orchestrator | 2026-02-16 04:47:05 | INFO  | Setting property provided_until: none 2026-02-16 04:47:28.839371 | orchestrator | 2026-02-16 04:47:06 | INFO  | Setting property image_description: Cirros 2026-02-16 04:47:28.839381 | orchestrator | 2026-02-16 04:47:06 | INFO  | 
Setting property image_name: Cirros 2026-02-16 04:47:28.839391 | orchestrator | 2026-02-16 04:47:06 | INFO  | Setting property internal_version: 0.6.2 2026-02-16 04:47:28.839400 | orchestrator | 2026-02-16 04:47:06 | INFO  | Setting property image_original_user: cirros 2026-02-16 04:47:28.839434 | orchestrator | 2026-02-16 04:47:07 | INFO  | Setting property os_version: 0.6.2 2026-02-16 04:47:28.839452 | orchestrator | 2026-02-16 04:47:07 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-16 04:47:28.839464 | orchestrator | 2026-02-16 04:47:07 | INFO  | Setting property image_build_date: 2023-05-30 2026-02-16 04:47:28.839473 | orchestrator | 2026-02-16 04:47:08 | INFO  | Checking status of 'Cirros 0.6.2' 2026-02-16 04:47:28.839483 | orchestrator | 2026-02-16 04:47:08 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-02-16 04:47:28.839493 | orchestrator | 2026-02-16 04:47:08 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-02-16 04:47:28.839503 | orchestrator | 2026-02-16 04:47:08 | INFO  | Processing image 'Cirros 0.6.3' 2026-02-16 04:47:28.839516 | orchestrator | 2026-02-16 04:47:08 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-02-16 04:47:28.839526 | orchestrator | 2026-02-16 04:47:08 | INFO  | Importing image Cirros 0.6.3 2026-02-16 04:47:28.839535 | orchestrator | 2026-02-16 04:47:08 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-16 04:47:28.839545 | orchestrator | 2026-02-16 04:47:09 | INFO  | Waiting for image to leave queued state... 2026-02-16 04:47:28.839554 | orchestrator | 2026-02-16 04:47:11 | INFO  | Waiting for import to complete... 
2026-02-16 04:47:28.839581 | orchestrator | 2026-02-16 04:47:22 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-02-16 04:47:28.839591 | orchestrator | 2026-02-16 04:47:22 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-02-16 04:47:28.839601 | orchestrator | 2026-02-16 04:47:22 | INFO  | Setting internal_version = 0.6.3
2026-02-16 04:47:28.839610 | orchestrator | 2026-02-16 04:47:22 | INFO  | Setting image_original_user = cirros
2026-02-16 04:47:28.839619 | orchestrator | 2026-02-16 04:47:22 | INFO  | Adding tag os:cirros
2026-02-16 04:47:28.839629 | orchestrator | 2026-02-16 04:47:22 | INFO  | Setting property architecture: x86_64
2026-02-16 04:47:28.839638 | orchestrator | 2026-02-16 04:47:23 | INFO  | Setting property hw_disk_bus: scsi
2026-02-16 04:47:28.839648 | orchestrator | 2026-02-16 04:47:23 | INFO  | Setting property hw_rng_model: virtio
2026-02-16 04:47:28.839657 | orchestrator | 2026-02-16 04:47:23 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-16 04:47:28.839667 | orchestrator | 2026-02-16 04:47:24 | INFO  | Setting property hw_watchdog_action: reset
2026-02-16 04:47:28.839676 | orchestrator | 2026-02-16 04:47:24 | INFO  | Setting property hypervisor_type: qemu
2026-02-16 04:47:28.839686 | orchestrator | 2026-02-16 04:47:24 | INFO  | Setting property os_distro: cirros
2026-02-16 04:47:28.839698 | orchestrator | 2026-02-16 04:47:25 | INFO  | Setting property os_purpose: minimal
2026-02-16 04:47:28.839714 | orchestrator | 2026-02-16 04:47:25 | INFO  | Setting property replace_frequency: never
2026-02-16 04:47:28.839739 | orchestrator | 2026-02-16 04:47:25 | INFO  | Setting property uuid_validity: none
2026-02-16 04:47:28.839757 | orchestrator | 2026-02-16 04:47:25 | INFO  | Setting property provided_until: none
2026-02-16 04:47:28.839773 | orchestrator | 2026-02-16 04:47:26 | INFO  | Setting property image_description: Cirros
2026-02-16 04:47:28.839788 | orchestrator | 2026-02-16 04:47:26 | INFO  | Setting property image_name: Cirros
2026-02-16 04:47:28.839804 | orchestrator | 2026-02-16 04:47:26 | INFO  | Setting property internal_version: 0.6.3
2026-02-16 04:47:28.839831 | orchestrator | 2026-02-16 04:47:26 | INFO  | Setting property image_original_user: cirros
2026-02-16 04:47:28.839845 | orchestrator | 2026-02-16 04:47:27 | INFO  | Setting property os_version: 0.6.3
2026-02-16 04:47:28.839862 | orchestrator | 2026-02-16 04:47:27 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-16 04:47:28.839876 | orchestrator | 2026-02-16 04:47:27 | INFO  | Setting property image_build_date: 2024-09-26
2026-02-16 04:47:28.839892 | orchestrator | 2026-02-16 04:47:27 | INFO  | Checking status of 'Cirros 0.6.3'
2026-02-16 04:47:28.839908 | orchestrator | 2026-02-16 04:47:27 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-02-16 04:47:28.839924 | orchestrator | 2026-02-16 04:47:27 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-02-16 04:47:29.160600 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-02-16 04:47:31.540313 | orchestrator | 2026-02-16 04:47:31 | INFO  | date: 2026-02-16
2026-02-16 04:47:31.540405 | orchestrator | 2026-02-16 04:47:31 | INFO  | image: octavia-amphora-haproxy-2024.2.20260216.qcow2
2026-02-16 04:47:31.540438 | orchestrator | 2026-02-16 04:47:31 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260216.qcow2
2026-02-16 04:47:31.540449 | orchestrator | 2026-02-16 04:47:31 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260216.qcow2.CHECKSUM
2026-02-16 04:47:31.842793 | orchestrator | 2026-02-16 04:47:31 | INFO  | checksum: 0ecbbe110f986f4323022ec1ad407fd35300b115ab581e30866c2035d6b19160
2026-02-16 04:47:31.918343 | orchestrator | 2026-02-16 04:47:31 | INFO  | It takes a moment until task 2c67f41b-02e2-4b9d-a899-9d6ba561d6a3 (image-manager) has been started and output is visible here.
2026-02-16 04:48:55.320170 | orchestrator | 2026-02-16 04:47:34 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-16'
2026-02-16 04:48:55.320279 | orchestrator | 2026-02-16 04:47:34 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260216.qcow2: 200
2026-02-16 04:48:55.320294 | orchestrator | 2026-02-16 04:47:34 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-16
2026-02-16 04:48:55.320303 | orchestrator | 2026-02-16 04:47:34 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260216.qcow2
2026-02-16 04:48:55.320312 | orchestrator | 2026-02-16 04:47:35 | INFO  | Waiting for image to leave queued state...
2026-02-16 04:48:55.320319 | orchestrator | 2026-02-16 04:47:37 | INFO  | Waiting for import to complete...
2026-02-16 04:48:55.320326 | orchestrator | 2026-02-16 04:47:47 | INFO  | Waiting for import to complete...
2026-02-16 04:48:55.320333 | orchestrator | 2026-02-16 04:47:58 | INFO  | Waiting for import to complete...
2026-02-16 04:48:55.320341 | orchestrator | 2026-02-16 04:48:08 | INFO  | Waiting for import to complete...
2026-02-16 04:48:55.320351 | orchestrator | 2026-02-16 04:48:18 | INFO  | Waiting for import to complete...
2026-02-16 04:48:55.320359 | orchestrator | 2026-02-16 04:48:28 | INFO  | Waiting for import to complete...
2026-02-16 04:48:55.320367 | orchestrator | 2026-02-16 04:48:38 | INFO  | Waiting for import to complete...
2026-02-16 04:48:55.320375 | orchestrator | 2026-02-16 04:48:48 | INFO  | Import of 'OpenStack Octavia Amphora 2026-02-16' successfully completed, reloading images
2026-02-16 04:48:55.320384 | orchestrator | 2026-02-16 04:48:49 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-02-16'
2026-02-16 04:48:55.320413 | orchestrator | 2026-02-16 04:48:49 | INFO  | Setting internal_version = 2026-02-16
2026-02-16 04:48:55.320421 | orchestrator | 2026-02-16 04:48:49 | INFO  | Setting image_original_user = ubuntu
2026-02-16 04:48:55.320429 | orchestrator | 2026-02-16 04:48:49 | INFO  | Adding tag amphora
2026-02-16 04:48:55.320436 | orchestrator | 2026-02-16 04:48:49 | INFO  | Adding tag os:ubuntu
2026-02-16 04:48:55.320444 | orchestrator | 2026-02-16 04:48:49 | INFO  | Setting property architecture: x86_64
2026-02-16 04:48:55.320452 | orchestrator | 2026-02-16 04:48:50 | INFO  | Setting property hw_disk_bus: scsi
2026-02-16 04:48:55.320459 | orchestrator | 2026-02-16 04:48:50 | INFO  | Setting property hw_rng_model: virtio
2026-02-16 04:48:55.320468 | orchestrator | 2026-02-16 04:48:50 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-16 04:48:55.320476 | orchestrator | 2026-02-16 04:48:51 | INFO  | Setting property hw_watchdog_action: reset
2026-02-16 04:48:55.320483 | orchestrator | 2026-02-16 04:48:51 | INFO  | Setting property hypervisor_type: qemu
2026-02-16 04:48:55.320490 | orchestrator | 2026-02-16 04:48:51 | INFO  | Setting property os_distro: ubuntu
2026-02-16 04:48:55.320498 | orchestrator | 2026-02-16 04:48:51 | INFO  | Setting property replace_frequency: quarterly
2026-02-16 04:48:55.320504 | orchestrator | 2026-02-16 04:48:52 | INFO  | Setting property uuid_validity: last-1
2026-02-16 04:48:55.320512 | orchestrator | 2026-02-16 04:48:52 | INFO  | Setting property provided_until: none
2026-02-16 04:48:55.320519 | orchestrator | 2026-02-16 04:48:52 | INFO  | Setting property os_purpose: network
2026-02-16 04:48:55.320540 | orchestrator | 2026-02-16 04:48:52 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-02-16 04:48:55.320548 | orchestrator | 2026-02-16 04:48:53 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-02-16 04:48:55.320556 | orchestrator | 2026-02-16 04:48:53 | INFO  | Setting property internal_version: 2026-02-16
2026-02-16 04:48:55.320564 | orchestrator | 2026-02-16 04:48:53 | INFO  | Setting property image_original_user: ubuntu
2026-02-16 04:48:55.320572 | orchestrator | 2026-02-16 04:48:54 | INFO  | Setting property os_version: 2026-02-16
2026-02-16 04:48:55.320580 | orchestrator | 2026-02-16 04:48:54 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260216.qcow2
2026-02-16 04:48:55.320587 | orchestrator | 2026-02-16 04:48:54 | INFO  | Setting property image_build_date: 2026-02-16
2026-02-16 04:48:55.320594 | orchestrator | 2026-02-16 04:48:54 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-02-16'
2026-02-16 04:48:55.320617 | orchestrator | 2026-02-16 04:48:54 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-02-16'
2026-02-16 04:48:55.320626 | orchestrator | 2026-02-16 04:48:55 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-02-16 04:48:55.320634 | orchestrator | 2026-02-16 04:48:55 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-02-16 04:48:55.320642 | orchestrator | 2026-02-16 04:48:55 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-02-16 04:48:55.320650 | orchestrator | 2026-02-16 04:48:55 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-02-16 04:48:55.834401 | orchestrator | ok: Runtime: 0:03:19.303919
2026-02-16 04:48:55.880920 |
2026-02-16 04:48:55.881051 | TASK [Run checks]
2026-02-16 04:48:56.585000 | orchestrator | + set -e
2026-02-16 04:48:56.585142 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-16 04:48:56.585156 | orchestrator | ++ export INTERACTIVE=false
2026-02-16 04:48:56.585168 | orchestrator | ++ INTERACTIVE=false
2026-02-16 04:48:56.585176 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-16 04:48:56.585183 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-16 04:48:56.585191 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-16 04:48:56.586496 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-16 04:48:56.590799 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-16 04:48:56.590895 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-16 04:48:56.590911 | orchestrator | + echo
2026-02-16 04:48:56.590926 | orchestrator |
2026-02-16 04:48:56.590947 | orchestrator | # CHECK
2026-02-16 04:48:56.590965 | orchestrator |
2026-02-16 04:48:56.591043 | orchestrator | + echo '# CHECK'
2026-02-16 04:48:56.591063 | orchestrator | + echo
2026-02-16 04:48:56.591088 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-16 04:48:56.591825 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-16 04:48:56.652478 | orchestrator |
2026-02-16 04:48:56.652593 | orchestrator | ## Containers @ testbed-manager
2026-02-16 04:48:56.652619 | orchestrator |
2026-02-16 04:48:56.652657 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-16 04:48:56.652678 | orchestrator | + echo
2026-02-16 04:48:56.652694 | orchestrator | + echo '## Containers @ testbed-manager'
2026-02-16 04:48:56.652705 | orchestrator | + echo
2026-02-16 04:48:56.652717 | orchestrator | + osism container testbed-manager ps
2026-02-16 04:48:58.745742 | orchestrator | 2026-02-16 04:48:58 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-02-16 04:48:59.157533 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-16 04:48:59.157659 | orchestrator | a60ac7412258 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter
2026-02-16 04:48:59.157685 | orchestrator | 5df876ba0794 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager
2026-02-16 04:48:59.157698 | orchestrator | 6fc0f0c8d26c registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor
2026-02-16 04:48:59.157710 | orchestrator | 9189cfb3a23e registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-02-16 04:48:59.157722 | orchestrator | bab687bfde96 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server
2026-02-16 04:48:59.157739 | orchestrator | 25fd453e3e0c registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 58 minutes ago Up 57 minutes cephclient
2026-02-16 04:48:59.157751 | orchestrator | b54938c4bb9f registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-16 04:48:59.157762 | orchestrator | 0dbaf5f49ca1 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-02-16 04:48:59.157807 | orchestrator | 8466a87abb2a registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-02-16 04:48:59.157820 | orchestrator | 51c65441b83f registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient
2026-02-16 04:48:59.157831 | orchestrator | 07158c9c81e8 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin
2026-02-16 04:48:59.157843 | orchestrator | a84b2e11c70f registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer
2026-02-16 04:48:59.157854 | orchestrator | 19e5f419753e registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit
2026-02-16 04:48:59.157866 | orchestrator | 2599210a778b registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid
2026-02-16 04:48:59.157898 | orchestrator | 1b8a394aa47d registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1
2026-02-16 04:48:59.157919 | orchestrator | b01512cdff51 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible
2026-02-16 04:48:59.157931 | orchestrator | 9622f9df82b7 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible
2026-02-16 04:48:59.157942 | orchestrator | ac6af99ea9e8 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible
2026-02-16 04:48:59.157954 | orchestrator | 0d2850bc41ee registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes
2026-02-16 04:48:59.157965 | orchestrator | 63b09fcfffa0 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1
2026-02-16 04:48:59.158072 | orchestrator | d134686509c8 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1
2026-02-16 04:48:59.158105 | orchestrator | bd06b8dda32b registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1
2026-02-16 04:48:59.158139 | orchestrator | 8d01b8d12c61 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend
2026-02-16 04:48:59.158158 | orchestrator | 8dda2bebbba4 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient
2026-02-16 04:48:59.158177 | orchestrator | 57573f93161f registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1
2026-02-16 04:48:59.158196 | orchestrator | 3ccba20b54f0 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1
2026-02-16 04:48:59.158215 | orchestrator | 5e8608a7ed4d registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1
2026-02-16 04:48:59.158234 | orchestrator | 5e779f7a7308 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-02-16 04:48:59.158253 | orchestrator | 017986acaa7a registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1
2026-02-16 04:48:59.158281 | orchestrator | cac21a4a37b5 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-02-16 04:48:59.501226 | orchestrator |
2026-02-16 04:48:59.501354 | orchestrator | ## Images @ testbed-manager
2026-02-16 04:48:59.501384 | orchestrator |
2026-02-16 04:48:59.501398 | orchestrator | + echo
2026-02-16 04:48:59.501412 | orchestrator | + echo '## Images @ testbed-manager'
2026-02-16 04:48:59.501426 | orchestrator | + echo
2026-02-16 04:48:59.501443 | orchestrator | + osism container testbed-manager images
2026-02-16 04:49:01.892858 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-16 04:49:01.892995 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 105606ff7489 25 hours ago 239MB
2026-02-16 04:49:01.893014 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 weeks ago 41.4MB
2026-02-16 04:49:01.893028 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 2 months ago 11.5MB
2026-02-16 04:49:01.893041 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 2 months ago 608MB
2026-02-16 04:49:01.893054 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-02-16 04:49:01.893067 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-02-16 04:49:01.893080 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB
2026-02-16 04:49:01.893095 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 2 months ago 308MB
2026-02-16 04:49:01.893108 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB
2026-02-16 04:49:01.893150 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 2 months ago 404MB
2026-02-16 04:49:01.893163 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 2 months ago 839MB
2026-02-16 04:49:01.893176 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB
2026-02-16 04:49:01.893188 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 2 months ago 330MB
2026-02-16 04:49:01.893201 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 2 months ago 613MB
2026-02-16 04:49:01.893213 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 2 months ago 560MB
2026-02-16 04:49:01.893226 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 2 months ago 1.23GB
2026-02-16 04:49:01.893238 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 2 months ago 383MB
2026-02-16 04:49:01.893251 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 2 months ago 238MB
2026-02-16 04:49:01.893264 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 3 months ago 334MB
2026-02-16 04:49:01.893277 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 4 months ago 742MB
2026-02-16 04:49:01.893290 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 5 months ago 275MB
2026-02-16 04:49:01.893303 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 6 months ago 226MB
2026-02-16 04:49:01.893315 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 9 months ago 453MB
2026-02-16 04:49:01.893328 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 20 months ago 146MB
2026-02-16 04:49:01.893341 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB
2026-02-16 04:49:02.222058 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-16 04:49:02.222187 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-16 04:49:02.285225 | orchestrator |
2026-02-16 04:49:02.285331 | orchestrator | ## Containers @ testbed-node-0
2026-02-16 04:49:02.285346 | orchestrator |
2026-02-16 04:49:02.285358 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-16 04:49:02.285369 | orchestrator | + echo
2026-02-16 04:49:02.285380 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-02-16 04:49:02.285392 | orchestrator | + echo
2026-02-16 04:49:02.285403 | orchestrator | + osism container testbed-node-0 ps
2026-02-16 04:49:04.721791 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-16 04:49:04.721896 | orchestrator | 17c4f7db077e registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-02-16 04:49:04.721933 | orchestrator | 32f35b44c293 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-02-16 04:49:04.721947 | orchestrator | 97444c9ef4a2 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2026-02-16 04:49:04.721960 | orchestrator | edada1bcd18b registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-02-16 04:49:04.722069 | orchestrator | 6ff406e33c90 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor
2026-02-16 04:49:04.722086 | orchestrator | 8287b494395a registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-02-16 04:49:04.722106 | orchestrator | 3aab9555631e registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-02-16 04:49:04.722118 | orchestrator | b1212ac69dde registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-02-16 04:49:04.722130 | orchestrator | 3154bb8443e4 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-02-16 04:49:04.722143 | orchestrator | d60d85a3b8e3 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler
2026-02-16 04:49:04.722155 | orchestrator | 288a67580928 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-02-16 04:49:04.722168 | orchestrator | 04157745b91a registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-02-16 04:49:04.722180 | orchestrator | b62a3eba2d15 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-02-16 04:49:04.722192 | orchestrator | 1ffe49fd345b registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-02-16 04:49:04.722205 | orchestrator | 64f4236afbc1 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-02-16 04:49:04.722218 | orchestrator | 3dea0c8b9c4c registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-02-16 04:49:04.722230 | orchestrator | 1629c4155656 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 19 minutes ceilometer_central
2026-02-16 04:49:04.722242 | orchestrator | d9d4fdb8f700 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-02-16 04:49:04.722255 | orchestrator | 5f1e1c5be088 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker
2026-02-16 04:49:04.722294 | orchestrator | 890043ef11f7 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping
2026-02-16 04:49:04.722308 | orchestrator | f2857083eb5f registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager
2026-02-16 04:49:04.722320 | orchestrator | 30ea37e8d017 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent
2026-02-16 04:49:04.722342 | orchestrator | 4db44586d808 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-02-16 04:49:04.722353 | orchestrator | 651b25f51198 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker
2026-02-16 04:49:04.722366 | orchestrator | 1dc731cf0dd4 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns
2026-02-16 04:49:04.722383 | orchestrator | 0a59bbff485a registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer
2026-02-16 04:49:04.722396 | orchestrator | a4b8986f1cac registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central
2026-02-16 04:49:04.722408 | orchestrator | b83fba39a6ae registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-02-16 04:49:04.722420 | orchestrator | ab343b532d3c registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9
2026-02-16 04:49:04.722433 | orchestrator | 97f922f6ad8a registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-02-16 04:49:04.722446 | orchestrator | 7416dac6540d registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener
2026-02-16 04:49:04.722459 | orchestrator | d97077bd1ec1 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-02-16 04:49:04.722471 | orchestrator | 5e0f0c5f236d registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup
2026-02-16 04:49:04.722484 | orchestrator | e55a135a9cfb registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume
2026-02-16 04:49:04.722496 | orchestrator | 318a85ca948e registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-02-16 04:49:04.722509 | orchestrator | 517640a126df registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-02-16 04:49:04.722522 | orchestrator | 2dcf05c69216 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api
2026-02-16 04:49:04.722535 | orchestrator | 88eeb497cb24 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console
2026-02-16 04:49:04.722547 | orchestrator | 2e7df383d965 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver
2026-02-16 04:49:04.722567 | orchestrator | cfe7e99e98f8 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon
2026-02-16 04:49:04.722588 | orchestrator | 5495db80e0ba registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy
2026-02-16 04:49:04.722602 | orchestrator | adc50b1fecdc registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor
2026-02-16 04:49:04.722620 | orchestrator | b33dfeee7385 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api
2026-02-16 04:49:04.722632 | orchestrator | 3e865516b84d registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler
2026-02-16 04:49:04.722644 | orchestrator | 0d29702fbb54 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server
2026-02-16 04:49:04.722657 | orchestrator | b2ffb67efe30 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api
2026-02-16 04:49:04.722668 | orchestrator | 74f109526363 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone
2026-02-16 04:49:04.722680 | orchestrator | 4e1ef0e4e04e registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet
2026-02-16 04:49:04.722712 | orchestrator | 6ed736973871 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh
2026-02-16 04:49:04.722723 | orchestrator | 0187b9514070 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-0
2026-02-16 04:49:04.722735 | orchestrator | 32ceb51db73c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-02-16 04:49:04.722746 | orchestrator | c4764146f42e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-02-16 04:49:04.722757 | orchestrator | 0dc57a36bf5a registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-02-16 04:49:04.722769 | orchestrator | 1abe0dbb00ea registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-02-16 04:49:04.722780 | orchestrator | 14430a170826 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-02-16 04:49:04.722791 | orchestrator | 93ba8ad877d7 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-02-16 04:49:04.722808 | orchestrator | 6825b2f1f068 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-02-16 04:49:04.722819 | orchestrator | 4f57f6b25218 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-02-16 04:49:04.722838 | orchestrator | 35036f5c7786 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-02-16 04:49:04.722856 | orchestrator | b0844f48909a registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-02-16 04:49:04.722867 | orchestrator | 4b42659cff79 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-02-16 04:49:04.722879 | orchestrator | 20b9283b2a68 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-02-16 04:49:04.722890 | orchestrator | afb4836c5494 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-02-16 04:49:04.722902 | orchestrator | 0e0b17ee2558 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-02-16 04:49:04.722913 | orchestrator | 3914557ed95b registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-02-16 04:49:04.722924 | orchestrator | 3734c11af2f8 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-02-16 04:49:04.722934 | orchestrator | 271b361dcd6b registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql
2026-02-16 04:49:04.722946 | orchestrator | 6b47a62c43dd registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy
2026-02-16 04:49:04.722959 | orchestrator | 89a6ef7e56f7 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-16 04:49:04.723016 | orchestrator | a97170afbd41 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-02-16 04:49:04.723030 | orchestrator | 9fdda7e16433 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-02-16 04:49:05.046885 | orchestrator |
2026-02-16 04:49:05.047020 | orchestrator | ## Images @ testbed-node-0
2026-02-16 04:49:05.047040 | orchestrator |
2026-02-16 04:49:05.047053 | orchestrator | + echo
2026-02-16 04:49:05.047066 | orchestrator | + echo '## Images @ testbed-node-0'
2026-02-16 04:49:05.047079 | orchestrator | + echo
2026-02-16 04:49:05.047090 | orchestrator | + osism container testbed-node-0 images
2026-02-16 04:49:07.575532 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-16 04:49:07.575673 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB
2026-02-16 04:49:07.575701 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB
2026-02-16 04:49:07.575722 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB
2026-02-16 04:49:07.575742 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB
2026-02-16 04:49:07.575793 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB
2026-02-16 04:49:07.575813 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-02-16 04:49:07.575831 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-02-16 04:49:07.575849 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB
2026-02-16 04:49:07.575868 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB
2026-02-16 04:49:07.575887 |
orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-16 04:49:07.575906 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-16 04:49:07.575924 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-16 04:49:07.575943 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-16 04:49:07.575962 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-16 04:49:07.576011 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-16 04:49:07.576031 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-16 04:49:07.576049 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-16 04:49:07.576067 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-16 04:49:07.576087 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-16 04:49:07.576105 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-16 04:49:07.576123 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-16 04:49:07.576141 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-16 04:49:07.576160 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-16 
04:49:07.576176 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-16 04:49:07.576192 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-16 04:49:07.576208 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-16 04:49:07.576225 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-16 04:49:07.576251 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-16 04:49:07.576269 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-16 04:49:07.576285 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-16 04:49:07.576317 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-16 04:49:07.576357 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-16 04:49:07.576374 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-16 04:49:07.576390 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-16 04:49:07.576407 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-16 04:49:07.576425 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-16 04:49:07.576443 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-16 04:49:07.576461 | 
orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-16 04:49:07.576478 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-16 04:49:07.576496 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-16 04:49:07.576512 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-16 04:49:07.576532 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-16 04:49:07.576550 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-16 04:49:07.576568 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-16 04:49:07.576586 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-16 04:49:07.576606 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-16 04:49:07.576626 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-16 04:49:07.576645 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-16 04:49:07.576664 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-16 04:49:07.576683 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-16 04:49:07.576700 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-16 04:49:07.576719 | 
orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-16 04:49:07.576737 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-16 04:49:07.576756 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-16 04:49:07.576776 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-16 04:49:07.576793 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-16 04:49:07.576824 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-16 04:49:07.576836 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-16 04:49:07.576854 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-16 04:49:07.576866 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-16 04:49:07.576877 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-16 04:49:07.576888 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-16 04:49:07.576899 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-16 04:49:07.576923 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-16 04:49:07.576934 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-16 04:49:07.576945 | 
orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-16 04:49:07.576956 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-16 04:49:07.577013 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-16 04:49:07.577034 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-16 04:49:07.906485 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-16 04:49:07.906911 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-16 04:49:07.962567 | orchestrator | 2026-02-16 04:49:07.962658 | orchestrator | ## Containers @ testbed-node-1 2026-02-16 04:49:07.962679 | orchestrator | 2026-02-16 04:49:07.962692 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-16 04:49:07.962703 | orchestrator | + echo 2026-02-16 04:49:07.962715 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-02-16 04:49:07.962727 | orchestrator | + echo 2026-02-16 04:49:07.962739 | orchestrator | + osism container testbed-node-1 ps 2026-02-16 04:49:10.434349 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-16 04:49:10.434443 | orchestrator | be29e15f1642 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-16 04:49:10.434515 | orchestrator | a1f8e00608b5 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-02-16 04:49:10.434536 | orchestrator | 4b681a903fa5 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-16 04:49:10.434552 | orchestrator | dd90a5287c71 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 
"dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-16 04:49:10.434569 | orchestrator | 9780b8e2a037 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-02-16 04:49:10.434584 | orchestrator | 76fe17d11040 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-16 04:49:10.434626 | orchestrator | e1c6efc4d432 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-16 04:49:10.434637 | orchestrator | 83a5e5958a7d registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-16 04:49:10.434646 | orchestrator | 6dbe5a0b8185 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-16 04:49:10.434655 | orchestrator | 93be2ec9ced1 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-02-16 04:49:10.434664 | orchestrator | b56ced8d45cf registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-02-16 04:49:10.434673 | orchestrator | 767d73c90ed4 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-16 04:49:10.434697 | orchestrator | c90fac9ca92b registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-02-16 04:49:10.434706 | orchestrator | a439d0440c2c 
registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-16 04:49:10.434715 | orchestrator | 0750e88d7217 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-16 04:49:10.434724 | orchestrator | e3058341e9f7 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-16 04:49:10.434732 | orchestrator | 3e7ebbaaeae3 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-02-16 04:49:10.434741 | orchestrator | a2e355078996 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-02-16 04:49:10.434750 | orchestrator | cf18885bcaab registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-16 04:49:10.434775 | orchestrator | f8cc45b416ba registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-16 04:49:10.434785 | orchestrator | 3662e5286a50 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-16 04:49:10.434794 | orchestrator | 799f43579c12 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-02-16 04:49:10.434802 | orchestrator | deaf6fc7d7c6 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-16 04:49:10.434811 | 
orchestrator | ab28b1409399 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-16 04:49:10.434826 | orchestrator | 762de3b2a6e2 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-16 04:49:10.434834 | orchestrator | 98655902c80f registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-16 04:49:10.434843 | orchestrator | 2c35f6d58fd4 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-02-16 04:49:10.434852 | orchestrator | 9b429734188a registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-16 04:49:10.434861 | orchestrator | e8948f978390 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-02-16 04:49:10.434870 | orchestrator | 1e34c5a885e8 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-02-16 04:49:10.434878 | orchestrator | 1bfb11f06fbf registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-02-16 04:49:10.434963 | orchestrator | bfc0e996fba7 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-16 04:49:10.435054 | orchestrator | 0d79e3a319a6 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes 
(healthy) cinder_backup 2026-02-16 04:49:10.435064 | orchestrator | 9f409a3eabc0 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-02-16 04:49:10.435074 | orchestrator | 280b546a38c5 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-16 04:49:10.435084 | orchestrator | ad1e28f65273 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-02-16 04:49:10.435103 | orchestrator | 51b479c18140 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) glance_api 2026-02-16 04:49:10.435114 | orchestrator | 244e90fad2a4 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-02-16 04:49:10.435123 | orchestrator | 3b70c0e4f9f0 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-02-16 04:49:10.435134 | orchestrator | baec9d67e577 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-02-16 04:49:10.437025 | orchestrator | 9f1d89fdc923 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-02-16 04:49:10.437090 | orchestrator | 2fd7e511f75b registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-02-16 04:49:10.437102 | orchestrator | f32b0b64b504 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-02-16 
04:49:10.437110 | orchestrator | 17bec4e7809d registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-02-16 04:49:10.437119 | orchestrator | 64d8699a8200 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server 2026-02-16 04:49:10.437128 | orchestrator | 735319a75a2b registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api 2026-02-16 04:49:10.437137 | orchestrator | 07b5e1240f59 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone 2026-02-16 04:49:10.437145 | orchestrator | 47595131edcb registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet 2026-02-16 04:49:10.437154 | orchestrator | f1c062eafd31 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh 2026-02-16 04:49:10.437165 | orchestrator | 945a4c23c9cb registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-1 2026-02-16 04:49:10.437174 | orchestrator | f9f7d0e9f856 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-02-16 04:49:10.437183 | orchestrator | 8a5d26661ef8 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-02-16 04:49:10.437192 | orchestrator | 80b656edf251 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-16 04:49:10.437201 | orchestrator | eac022e439b1 
registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-16 04:49:10.437210 | orchestrator | 3d7f5f3507a4 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-16 04:49:10.437218 | orchestrator | a50e13767913 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-16 04:49:10.437227 | orchestrator | fda92a596eba registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-16 04:49:10.437236 | orchestrator | 875064e96fea registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-16 04:49:10.437245 | orchestrator | 6e82cb50e022 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-16 04:49:10.437259 | orchestrator | c382d1687784 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-16 04:49:10.437286 | orchestrator | f0d828638559 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-16 04:49:10.437295 | orchestrator | cf39bc653441 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-16 04:49:10.437304 | orchestrator | 61ffb13d592b registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-16 04:49:10.437313 | orchestrator | 4b3a1a0e24d1 
registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-16 04:49:10.437328 | orchestrator | 3ae068920d65 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-16 04:49:10.437337 | orchestrator | 64c57b8a23c2 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-02-16 04:49:10.437346 | orchestrator | 8576c5b22718 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-02-16 04:49:10.437354 | orchestrator | 97bd8d2146f0 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2026-02-16 04:49:10.437363 | orchestrator | 5edad56af037 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-16 04:49:10.437376 | orchestrator | 06a2810eabf2 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-16 04:49:10.437385 | orchestrator | cf3e651ad306 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-16 04:49:10.784938 | orchestrator | 2026-02-16 04:49:10.785072 | orchestrator | ## Images @ testbed-node-1 2026-02-16 04:49:10.785090 | orchestrator | 2026-02-16 04:49:10.785103 | orchestrator | + echo 2026-02-16 04:49:10.785114 | orchestrator | + echo '## Images @ testbed-node-1' 2026-02-16 04:49:10.785127 | orchestrator | + echo 2026-02-16 04:49:10.785138 | orchestrator | + osism container testbed-node-1 images 2026-02-16 04:49:13.268451 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-16 04:49:13.268657 | orchestrator | 
registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-16 04:49:13.268692 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-16 04:49:13.268713 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-16 04:49:13.268733 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-16 04:49:13.268752 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-16 04:49:13.268763 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-16 04:49:13.268800 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-16 04:49:13.268812 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-16 04:49:13.268823 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-16 04:49:13.268834 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-16 04:49:13.268844 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-16 04:49:13.268855 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-16 04:49:13.268866 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-16 04:49:13.268876 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-16 04:49:13.268887 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 
1.15GB 2026-02-16 04:49:13.268898 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-16 04:49:13.268908 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-16 04:49:13.268919 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-16 04:49:13.268929 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-16 04:49:13.268940 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-16 04:49:13.268950 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-16 04:49:13.268961 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-16 04:49:13.269032 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-16 04:49:13.269045 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-16 04:49:13.269057 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-16 04:49:13.269069 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-16 04:49:13.269081 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-16 04:49:13.269094 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-16 04:49:13.269106 | orchestrator | 
registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-16 04:49:13.269119 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-16 04:49:13.269131 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-16 04:49:13.269164 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-16 04:49:13.269186 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-16 04:49:13.269199 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-16 04:49:13.269211 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-16 04:49:13.269224 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-16 04:49:13.269236 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-16 04:49:13.269268 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-16 04:49:13.269281 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-16 04:49:13.269295 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-16 04:49:13.269318 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-16 04:49:13.269346 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-16 04:49:13.269363 | orchestrator | 
registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-16 04:49:13.269381 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-16 04:49:13.269397 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-16 04:49:13.269413 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-16 04:49:13.269431 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-16 04:49:13.269450 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-16 04:49:13.269468 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-16 04:49:13.269488 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-16 04:49:13.269506 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-16 04:49:13.269524 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-16 04:49:13.269542 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-16 04:49:13.269553 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-16 04:49:13.269564 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-16 04:49:13.269575 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-16 04:49:13.269585 | orchestrator | 
registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-16 04:49:13.269596 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-16 04:49:13.269607 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-16 04:49:13.269628 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-16 04:49:13.269639 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-16 04:49:13.269649 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-16 04:49:13.269660 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-16 04:49:13.269682 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-16 04:49:13.269693 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-16 04:49:13.269704 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-16 04:49:13.269714 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-16 04:49:13.269725 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-16 04:49:13.269736 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-16 04:49:13.633834 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-16 04:49:13.634564 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-16 04:49:13.680783 | 
orchestrator | 2026-02-16 04:49:13.680852 | orchestrator | ## Containers @ testbed-node-2 2026-02-16 04:49:13.680861 | orchestrator | 2026-02-16 04:49:13.680868 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-16 04:49:13.680875 | orchestrator | + echo 2026-02-16 04:49:13.680882 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-02-16 04:49:13.680891 | orchestrator | + echo 2026-02-16 04:49:13.680898 | orchestrator | + osism container testbed-node-2 ps 2026-02-16 04:49:16.217684 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-16 04:49:16.217775 | orchestrator | 883b22031b61 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-16 04:49:16.217787 | orchestrator | 5924efb73176 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-02-16 04:49:16.217796 | orchestrator | d70fcfb8c79a registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-16 04:49:16.217804 | orchestrator | 581b568c45ea registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-16 04:49:16.217814 | orchestrator | 3afa85a58021 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-02-16 04:49:16.217822 | orchestrator | 0c744b7e9eba registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-16 04:49:16.217829 | orchestrator | 2d0b2e58d487 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-16 
04:49:16.217837 | orchestrator | 042937d0a33d registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-16 04:49:16.217864 | orchestrator | 1b91d391319a registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-16 04:49:16.217872 | orchestrator | cdd0d4c681d8 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-02-16 04:49:16.217879 | orchestrator | 28de65d5a725 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-02-16 04:49:16.217885 | orchestrator | 054bdc23aa9f registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-16 04:49:16.217910 | orchestrator | 35707798fead registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-02-16 04:49:16.217917 | orchestrator | 0e35dc9041fb registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-16 04:49:16.217924 | orchestrator | e1feb7b0be08 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-16 04:49:16.217931 | orchestrator | e9133c432867 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-16 04:49:16.217937 | orchestrator | 1aecb0716c76 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-02-16 04:49:16.217944 | orchestrator | 
0e9f1ff4f01c registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-02-16 04:49:16.217951 | orchestrator | 50d6df5fa91f registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-16 04:49:16.218065 | orchestrator | 019e0494c73f registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-16 04:49:16.218076 | orchestrator | 43e221aef56f registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-16 04:49:16.218083 | orchestrator | 9d3a4d41b331 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-02-16 04:49:16.218089 | orchestrator | 23adaf7b91e0 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-16 04:49:16.218096 | orchestrator | 671a06f49763 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-16 04:49:16.218103 | orchestrator | 3e59ba216aa5 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-16 04:49:16.218116 | orchestrator | fe8a9f265ef7 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-16 04:49:16.218122 | orchestrator | eaf654a155fa registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) 
designate_central 2026-02-16 04:49:16.218129 | orchestrator | 8007fb2ead6c registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-16 04:49:16.218136 | orchestrator | fdd751cb20e8 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-02-16 04:49:16.218142 | orchestrator | d681605864f4 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-02-16 04:49:16.218149 | orchestrator | 93329e4e8d55 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-02-16 04:49:16.218156 | orchestrator | 0a455888082e registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-16 04:49:16.218163 | orchestrator | fad1f71f4303 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-02-16 04:49:16.218169 | orchestrator | 4e6b551eb93b registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-02-16 04:49:16.218176 | orchestrator | b54f34c89b79 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-16 04:49:16.218182 | orchestrator | 9da1f34691a3 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-02-16 04:49:16.218189 | orchestrator | 2ab2eb173ab7 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 
34 minutes (healthy) glance_api 2026-02-16 04:49:16.218195 | orchestrator | 314f7a1bfb5c registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-02-16 04:49:16.218202 | orchestrator | a4b6d7d583db registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-02-16 04:49:16.218214 | orchestrator | c6c9a4fd6dc9 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-02-16 04:49:16.218221 | orchestrator | 8f367559c685 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-02-16 04:49:16.218227 | orchestrator | 1e074a8a6b0c registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-02-16 04:49:16.218234 | orchestrator | e25cad168508 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api 2026-02-16 04:49:16.218245 | orchestrator | 043928044b2f registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-02-16 04:49:16.218252 | orchestrator | 9ef021b67556 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server 2026-02-16 04:49:16.218258 | orchestrator | 82fd1a74bf95 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api 2026-02-16 04:49:16.218265 | orchestrator | 1d2ae891ed4c registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone 
2026-02-16 04:49:16.218272 | orchestrator | 259b5791b224 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet 2026-02-16 04:49:16.218278 | orchestrator | 242d7d357ca8 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh 2026-02-16 04:49:16.218285 | orchestrator | 3f49eb57773e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-2 2026-02-16 04:49:16.218292 | orchestrator | efaaf1f74054 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-02-16 04:49:16.218304 | orchestrator | 6720fcec1b21 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-02-16 04:49:16.218310 | orchestrator | 5f3c798a66a9 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-16 04:49:16.218320 | orchestrator | 892b7b518a5c registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-16 04:49:16.218327 | orchestrator | 6df27040d20a registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-16 04:49:16.218334 | orchestrator | 3bc8f0f53fae registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-16 04:49:16.218341 | orchestrator | 56fe23cca280 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-16 04:49:16.218347 | orchestrator | d79c789c4fa3 
registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-16 04:49:16.218354 | orchestrator | fdbac9270f8c registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-16 04:49:16.218364 | orchestrator | 7d078c4cec0b registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-16 04:49:16.218371 | orchestrator | 3c6eb2f32c90 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-16 04:49:16.218382 | orchestrator | 864a02a23f75 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-16 04:49:16.218389 | orchestrator | 0c76a9ccc5bb registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-16 04:49:16.218396 | orchestrator | 9ba209958929 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-16 04:49:16.218402 | orchestrator | df0de313c7c7 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-16 04:49:16.218409 | orchestrator | 61fed0b4f128 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-02-16 04:49:16.218415 | orchestrator | d3bebb6e9f8d registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-02-16 04:49:16.218422 | orchestrator | 8b4dee036b17 
registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2026-02-16 04:49:16.218429 | orchestrator | 435a16a8d1e3 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-16 04:49:16.218435 | orchestrator | 776a0beac077 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-16 04:49:16.218442 | orchestrator | 784e1dcf4b8f registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-16 04:49:16.584540 | orchestrator | 2026-02-16 04:49:16.584654 | orchestrator | ## Images @ testbed-node-2 2026-02-16 04:49:16.584675 | orchestrator | 2026-02-16 04:49:16.584692 | orchestrator | + echo 2026-02-16 04:49:16.584709 | orchestrator | + echo '## Images @ testbed-node-2' 2026-02-16 04:49:16.584726 | orchestrator | + echo 2026-02-16 04:49:16.584743 | orchestrator | + osism container testbed-node-2 images 2026-02-16 04:49:19.068311 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-16 04:49:19.068404 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-16 04:49:19.068414 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-16 04:49:19.068421 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-16 04:49:19.068442 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-16 04:49:19.068448 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-16 04:49:19.068455 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-16 
04:49:19.068461 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-16 04:49:19.068468 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-16 04:49:19.068491 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-16 04:49:19.068498 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-16 04:49:19.068507 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-16 04:49:19.068513 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-16 04:49:19.068520 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-16 04:49:19.068526 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-16 04:49:19.068532 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-16 04:49:19.068538 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-16 04:49:19.068544 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-16 04:49:19.068549 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-16 04:49:19.068555 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-16 04:49:19.068561 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-16 04:49:19.068567 | 
orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-16 04:49:19.068574 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-16 04:49:19.068580 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-16 04:49:19.068587 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-16 04:49:19.068593 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-16 04:49:19.068598 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-16 04:49:19.068604 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-16 04:49:19.068610 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-16 04:49:19.068617 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-16 04:49:19.068623 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-16 04:49:19.068630 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-16 04:49:19.068652 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-16 04:49:19.068659 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-16 04:49:19.068666 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-16 04:49:19.068671 | 
orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-16 04:49:19.068686 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-16 04:49:19.068692 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-16 04:49:19.068698 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-16 04:49:19.068713 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-16 04:49:19.068720 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-16 04:49:19.068726 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-16 04:49:19.068733 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-16 04:49:19.068739 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-16 04:49:19.068745 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-16 04:49:19.068751 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-16 04:49:19.068757 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-16 04:49:19.068763 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-16 04:49:19.068769 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-16 04:49:19.068775 | 
orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-16 04:49:19.068782 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-16 04:49:19.068788 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-16 04:49:19.068794 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-16 04:49:19.068801 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-16 04:49:19.068808 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-16 04:49:19.068814 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-16 04:49:19.068821 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-16 04:49:19.068828 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-16 04:49:19.068835 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-16 04:49:19.068842 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-16 04:49:19.068849 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-16 04:49:19.068855 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-16 04:49:19.068867 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-16 04:49:19.068874 
| orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-16 04:49:19.068888 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-16 04:49:19.068896 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-16 04:49:19.068902 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-16 04:49:19.068909 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-16 04:49:19.068920 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-16 04:49:19.068926 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-16 04:49:19.414289 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-02-16 04:49:19.421634 | orchestrator | + set -e 2026-02-16 04:49:19.421729 | orchestrator | + source /opt/manager-vars.sh 2026-02-16 04:49:19.421755 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-16 04:49:19.421776 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-16 04:49:19.421796 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-16 04:49:19.421815 | orchestrator | ++ CEPH_VERSION=reef 2026-02-16 04:49:19.421867 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-16 04:49:19.421887 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-16 04:49:19.421904 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-16 04:49:19.421922 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-16 04:49:19.421941 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-16 04:49:19.421959 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-16 04:49:19.422156 | orchestrator | ++ export ARA=false 2026-02-16 04:49:19.422174 | orchestrator | ++ 
ARA=false
2026-02-16 04:49:19.422195 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-16 04:49:19.422215 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-16 04:49:19.422234 | orchestrator | ++ export TEMPEST=false
2026-02-16 04:49:19.422253 | orchestrator | ++ TEMPEST=false
2026-02-16 04:49:19.422272 | orchestrator | ++ export IS_ZUUL=true
2026-02-16 04:49:19.422292 | orchestrator | ++ IS_ZUUL=true
2026-02-16 04:49:19.422312 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120
2026-02-16 04:49:19.422332 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120
2026-02-16 04:49:19.422352 | orchestrator | ++ export EXTERNAL_API=false
2026-02-16 04:49:19.422372 | orchestrator | ++ EXTERNAL_API=false
2026-02-16 04:49:19.422391 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-16 04:49:19.422411 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-16 04:49:19.422431 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-16 04:49:19.422450 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-16 04:49:19.422471 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-16 04:49:19.422493 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-16 04:49:19.422512 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-16 04:49:19.422532 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-02-16 04:49:19.432824 | orchestrator | + set -e
2026-02-16 04:49:19.432873 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-16 04:49:19.432884 | orchestrator | ++ export INTERACTIVE=false
2026-02-16 04:49:19.432895 | orchestrator | ++ INTERACTIVE=false
2026-02-16 04:49:19.432905 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-16 04:49:19.432914 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-16 04:49:19.432924 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-16 04:49:19.433777 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-16 04:49:19.437339 | orchestrator |
2026-02-16 04:49:19.437429 | orchestrator | # Ceph status
2026-02-16 04:49:19.437444 | orchestrator |
2026-02-16 04:49:19.437455 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-16 04:49:19.437467 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-16 04:49:19.437478 | orchestrator | + echo
2026-02-16 04:49:19.437489 | orchestrator | + echo '# Ceph status'
2026-02-16 04:49:19.437535 | orchestrator | + echo
2026-02-16 04:49:19.437546 | orchestrator | + ceph -s
2026-02-16 04:49:20.032660 | orchestrator | cluster:
2026-02-16 04:49:20.032735 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-02-16 04:49:20.032743 | orchestrator | health: HEALTH_OK
2026-02-16 04:49:20.032748 | orchestrator |
2026-02-16 04:49:20.032753 | orchestrator | services:
2026-02-16 04:49:20.032758 | orchestrator | mon: 3 daemons, quorum testbed-node-1,testbed-node-0,testbed-node-2 (age 68m)
2026-02-16 04:49:20.032765 | orchestrator | mgr: testbed-node-1(active, since 56m), standbys: testbed-node-0, testbed-node-2
2026-02-16 04:49:20.032770 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-02-16 04:49:20.032776 | orchestrator | osd: 6 osds: 6 up (since 65m), 6 in (since 65m)
2026-02-16 04:49:20.032781 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-02-16 04:49:20.032785 | orchestrator |
2026-02-16 04:49:20.032790 | orchestrator | data:
2026-02-16 04:49:20.032795 | orchestrator | volumes: 1/1 healthy
2026-02-16 04:49:20.032799 | orchestrator | pools: 14 pools, 401 pgs
2026-02-16 04:49:20.032804 | orchestrator | objects: 556 objects, 2.2 GiB
2026-02-16 04:49:20.032808 | orchestrator | usage: 7.0 GiB used, 113 GiB / 120 GiB avail
2026-02-16 04:49:20.032813 | orchestrator | pgs: 401 active+clean
2026-02-16 04:49:20.032817 | orchestrator |
2026-02-16 04:49:20.077642 | orchestrator |
2026-02-16 04:49:20.077720 | orchestrator | # Ceph versions
2026-02-16 04:49:20.077727 | orchestrator |
2026-02-16 04:49:20.077733 | orchestrator | + echo
2026-02-16 04:49:20.077741 | orchestrator | + echo '# Ceph versions'
2026-02-16 04:49:20.077750 | orchestrator | + echo
2026-02-16 04:49:20.077758 | orchestrator | + ceph versions
2026-02-16 04:49:20.676178 | orchestrator | {
2026-02-16 04:49:20.676287 | orchestrator | "mon": {
2026-02-16 04:49:20.676302 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-16 04:49:20.676313 | orchestrator | },
2026-02-16 04:49:20.676322 | orchestrator | "mgr": {
2026-02-16 04:49:20.676332 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-16 04:49:20.676341 | orchestrator | },
2026-02-16 04:49:20.676350 | orchestrator | "osd": {
2026-02-16 04:49:20.676359 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-02-16 04:49:20.676367 | orchestrator | },
2026-02-16 04:49:20.676376 | orchestrator | "mds": {
2026-02-16 04:49:20.676385 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-16 04:49:20.676393 | orchestrator | },
2026-02-16 04:49:20.676402 | orchestrator | "rgw": {
2026-02-16 04:49:20.676411 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-16 04:49:20.676419 | orchestrator | },
2026-02-16 04:49:20.676428 | orchestrator | "overall": {
2026-02-16 04:49:20.676437 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-02-16 04:49:20.676446 | orchestrator | }
2026-02-16 04:49:20.676455 | orchestrator | }
2026-02-16 04:49:20.724448 | orchestrator |
2026-02-16 04:49:20.724541 | orchestrator | # Ceph OSD tree
2026-02-16 04:49:20.724551 | orchestrator |
2026-02-16 04:49:20.724561 | orchestrator | + echo
2026-02-16 04:49:20.724569 | orchestrator | + echo '# Ceph OSD tree'
2026-02-16 04:49:20.724578 | orchestrator | + echo
2026-02-16 04:49:20.724587 | orchestrator | + ceph osd df tree
2026-02-16 04:49:21.268177 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-02-16 04:49:21.268283 | orchestrator | -1 0.11691 - 120 GiB 7.0 GiB 6.7 GiB 6 KiB 382 MiB 113 GiB 5.88 1.00 - root default
2026-02-16 04:49:21.268298 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-3
2026-02-16 04:49:21.268309 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 936 MiB 875 MiB 1 KiB 62 MiB 19 GiB 4.58 0.78 189 up osd.0
2026-02-16 04:49:21.268319 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 66 MiB 19 GiB 7.18 1.22 201 up osd.3
2026-02-16 04:49:21.268329 | orchestrator | -5 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-4
2026-02-16 04:49:21.268339 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 66 MiB 19 GiB 5.42 0.92 190 up osd.1
2026-02-16 04:49:21.268373 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 62 MiB 19 GiB 6.34 1.08 202 up osd.4
2026-02-16 04:49:21.268383 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-5
2026-02-16 04:49:21.268394 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 62 MiB 19 GiB 6.45 1.10 191 up osd.2
2026-02-16 04:49:21.268404 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1019 MiB 1 KiB 66 MiB 19 GiB 5.30 0.90 197 up osd.5
2026-02-16 04:49:21.268414 | orchestrator | TOTAL 120 GiB 7.0 GiB 6.7 GiB 9.3 KiB 382 MiB 113 GiB 5.88
2026-02-16 04:49:21.268424 | orchestrator | MIN/MAX VAR: 0.78/1.22 STDDEV: 0.86
2026-02-16 04:49:21.316443 | orchestrator |
2026-02-16 04:49:21.316521 | orchestrator | # Ceph monitor status
2026-02-16 04:49:21.316530 | orchestrator |
2026-02-16 04:49:21.316536 | orchestrator | + echo
2026-02-16 04:49:21.316543 | orchestrator | + echo '# Ceph monitor status'
2026-02-16 04:49:21.316550 | orchestrator | + echo
2026-02-16 04:49:21.316557 | orchestrator | + ceph mon stat
2026-02-16 04:49:21.894781 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.8:3300/0,v1:192.168.16.8:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 10, leader 0 testbed-node-1, quorum 0,1,2 testbed-node-1,testbed-node-0,testbed-node-2
2026-02-16 04:49:21.943004 | orchestrator |
2026-02-16 04:49:21.943120 | orchestrator | # Ceph quorum status
2026-02-16 04:49:21.943146 | orchestrator |
2026-02-16 04:49:21.943158 | orchestrator | + echo
2026-02-16 04:49:21.943168 | orchestrator | + echo '# Ceph quorum status'
2026-02-16 04:49:21.943178 | orchestrator | + echo
2026-02-16 04:49:21.943188 | orchestrator | + ceph quorum_status
2026-02-16 04:49:21.943198 | orchestrator | + jq
2026-02-16 04:49:22.601600 | orchestrator | {
2026-02-16 04:49:22.601696 | orchestrator | "election_epoch": 10,
2026-02-16 04:49:22.601712 | orchestrator | "quorum": [
2026-02-16 04:49:22.601724 | orchestrator | 0,
2026-02-16 04:49:22.601736 | orchestrator | 1,
2026-02-16 04:49:22.601747 | orchestrator | 2
2026-02-16 04:49:22.601757 | orchestrator | ],
2026-02-16 04:49:22.601768 | orchestrator | "quorum_names": [
2026-02-16 04:49:22.601779 | orchestrator | "testbed-node-1",
2026-02-16 04:49:22.601790 | orchestrator | "testbed-node-0",
2026-02-16 04:49:22.601801 | orchestrator | "testbed-node-2"
2026-02-16 04:49:22.601812 | orchestrator | ],
2026-02-16 04:49:22.601823 | orchestrator | "quorum_leader_name": "testbed-node-1",
2026-02-16 04:49:22.601835 | orchestrator | "quorum_age": 4123,
2026-02-16 04:49:22.601845 | orchestrator | "features": {
2026-02-16 04:49:22.601856 | orchestrator | "quorum_con": "4540138322906710015",
2026-02-16 04:49:22.601867 | orchestrator | "quorum_mon": [
2026-02-16 04:49:22.601878 | orchestrator | "kraken",
2026-02-16 04:49:22.601889 | orchestrator | "luminous",
2026-02-16 04:49:22.601900 | orchestrator | "mimic",
2026-02-16 04:49:22.601910 | orchestrator | "osdmap-prune",
2026-02-16 04:49:22.601921 | orchestrator | "nautilus",
2026-02-16 04:49:22.601932 | orchestrator | "octopus",
2026-02-16 04:49:22.601943 | orchestrator | "pacific",
2026-02-16 04:49:22.601953 | orchestrator | "elector-pinging",
2026-02-16 04:49:22.601999 | orchestrator | "quincy",
2026-02-16 04:49:22.602089 | orchestrator | "reef"
2026-02-16 04:49:22.602104 | orchestrator | ]
2026-02-16 04:49:22.602115 | orchestrator | },
2026-02-16 04:49:22.602126 | orchestrator | "monmap": {
2026-02-16 04:49:22.602137 | orchestrator | "epoch": 1,
2026-02-16 04:49:22.602148 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2026-02-16 04:49:22.602160 | orchestrator | "modified": "2026-02-16T03:40:11.184705Z",
2026-02-16 04:49:22.602172 | orchestrator | "created": "2026-02-16T03:40:11.184705Z",
2026-02-16 04:49:22.602185 | orchestrator | "min_mon_release": 18,
2026-02-16 04:49:22.602197 | orchestrator | "min_mon_release_name": "reef",
2026-02-16 04:49:22.602210 | orchestrator | "election_strategy": 1,
2026-02-16 04:49:22.602222 | orchestrator | "disallowed_leaders: ": "",
2026-02-16 04:49:22.602234 | orchestrator | "stretch_mode": false,
2026-02-16 04:49:22.602246 | orchestrator | "tiebreaker_mon": "",
2026-02-16 04:49:22.602258 | orchestrator | "removed_ranks: ": "",
2026-02-16 04:49:22.602270 | orchestrator | "features": {
2026-02-16 04:49:22.602283 | orchestrator | "persistent": [
2026-02-16 04:49:22.602295 | orchestrator | "kraken",
2026-02-16 04:49:22.602334 | orchestrator | "luminous",
2026-02-16 04:49:22.602347 | orchestrator | "mimic",
2026-02-16 04:49:22.602359 | orchestrator | "osdmap-prune",
2026-02-16 04:49:22.602372 | orchestrator | "nautilus",
2026-02-16 04:49:22.602384 | orchestrator | "octopus",
2026-02-16 04:49:22.602396 | orchestrator | "pacific",
2026-02-16 04:49:22.602408 | orchestrator | "elector-pinging",
2026-02-16 04:49:22.602420 | orchestrator | "quincy",
2026-02-16 04:49:22.602433 | orchestrator | "reef"
2026-02-16 04:49:22.602445 | orchestrator | ],
2026-02-16 04:49:22.602457 | orchestrator | "optional": []
2026-02-16 04:49:22.602469 | orchestrator | },
2026-02-16 04:49:22.602499 | orchestrator | "mons": [
2026-02-16 04:49:22.602512 | orchestrator | {
2026-02-16 04:49:22.602525 | orchestrator | "rank": 0,
2026-02-16 04:49:22.602536 | orchestrator | "name": "testbed-node-1",
2026-02-16 04:49:22.602547 | orchestrator | "public_addrs": {
2026-02-16 04:49:22.602558 | orchestrator | "addrvec": [
2026-02-16 04:49:22.602569 | orchestrator | {
2026-02-16 04:49:22.602579 | orchestrator | "type": "v2",
2026-02-16 04:49:22.602591 | orchestrator | "addr": "192.168.16.8:3300",
2026-02-16 04:49:22.602602 | orchestrator | "nonce": 0
2026-02-16 04:49:22.602613 | orchestrator | },
2026-02-16 04:49:22.602623 | orchestrator | {
2026-02-16 04:49:22.602634 | orchestrator | "type": "v1",
2026-02-16 04:49:22.602645 | orchestrator | "addr": "192.168.16.8:6789",
2026-02-16 04:49:22.602655 | orchestrator | "nonce": 0
2026-02-16 04:49:22.602666 | orchestrator | }
2026-02-16 04:49:22.602680 | orchestrator | ]
2026-02-16 04:49:22.602698 | orchestrator | },
2026-02-16 04:49:22.602716 | orchestrator | "addr": "192.168.16.8:6789/0",
2026-02-16 04:49:22.602744 | orchestrator | "public_addr": "192.168.16.8:6789/0",
2026-02-16 04:49:22.602763 | orchestrator | "priority": 0,
2026-02-16 04:49:22.602780 | orchestrator | "weight": 0,
2026-02-16 04:49:22.602798 | orchestrator | "crush_location": "{}"
2026-02-16 04:49:22.602815 | orchestrator | },
2026-02-16 04:49:22.602830 | orchestrator | {
2026-02-16 04:49:22.602849 | orchestrator | "rank": 1,
2026-02-16 04:49:22.602867 | orchestrator | "name": "testbed-node-0",
2026-02-16 04:49:22.602884 | orchestrator | "public_addrs": {
2026-02-16 04:49:22.602901 | orchestrator | "addrvec": [
2026-02-16 04:49:22.602919 | orchestrator | {
2026-02-16 04:49:22.602937 | orchestrator | "type": "v2",
2026-02-16 04:49:22.602956 | orchestrator | "addr": "192.168.16.10:3300",
2026-02-16 04:49:22.603002 | orchestrator | "nonce": 0
2026-02-16 04:49:22.603019 | orchestrator | },
2026-02-16 04:49:22.603031 | orchestrator | {
2026-02-16 04:49:22.603041 | orchestrator | "type": "v1",
2026-02-16 04:49:22.603052 | orchestrator | "addr": "192.168.16.10:6789",
2026-02-16 04:49:22.603062 | orchestrator | "nonce": 0
2026-02-16 04:49:22.603073 | orchestrator | }
2026-02-16 04:49:22.603084 | orchestrator | ]
2026-02-16 04:49:22.603095 | orchestrator | },
2026-02-16 04:49:22.603106 | orchestrator | "addr": "192.168.16.10:6789/0",
2026-02-16 04:49:22.603119 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2026-02-16 04:49:22.603138 | orchestrator | "priority": 0,
2026-02-16 04:49:22.603156 | orchestrator | "weight": 0,
2026-02-16 04:49:22.603174 | orchestrator | "crush_location": "{}"
2026-02-16 04:49:22.603191 | orchestrator | },
2026-02-16 04:49:22.603210 | orchestrator | {
2026-02-16 04:49:22.603228 | orchestrator | "rank": 2,
2026-02-16 04:49:22.603247 | orchestrator | "name": "testbed-node-2",
2026-02-16 04:49:22.603295 | orchestrator | "public_addrs": {
2026-02-16 04:49:22.603308 | orchestrator | "addrvec": [
2026-02-16 04:49:22.603319 | orchestrator | {
2026-02-16 04:49:22.603329 | orchestrator | "type": "v2",
2026-02-16 04:49:22.603340 | orchestrator | "addr": "192.168.16.12:3300",
2026-02-16 04:49:22.603351 | orchestrator | "nonce": 0
2026-02-16 04:49:22.603362 | orchestrator | },
2026-02-16 04:49:22.603373 | orchestrator | {
2026-02-16 04:49:22.603383 | orchestrator | "type": "v1",
2026-02-16 04:49:22.603394 | orchestrator | "addr": "192.168.16.12:6789",
2026-02-16 04:49:22.603405 | orchestrator | "nonce": 0
2026-02-16 04:49:22.603416 | orchestrator | }
2026-02-16 04:49:22.603426 | orchestrator | ]
2026-02-16 04:49:22.603437 | orchestrator | },
2026-02-16 04:49:22.603448 | orchestrator | "addr": "192.168.16.12:6789/0",
2026-02-16 04:49:22.603459 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2026-02-16 04:49:22.603469 | orchestrator | "priority": 0,
2026-02-16 04:49:22.603492 | orchestrator | "weight": 0,
2026-02-16 04:49:22.603502 | orchestrator | "crush_location": "{}"
2026-02-16 04:49:22.603513 | orchestrator | }
2026-02-16 04:49:22.603524 | orchestrator | ]
2026-02-16 04:49:22.603535 | orchestrator | }
2026-02-16 04:49:22.603546 | orchestrator | }
2026-02-16 04:49:22.603557 | orchestrator |
2026-02-16 04:49:22.603568 | orchestrator | + echo
2026-02-16 04:49:22.603579 | orchestrator | + echo '# Ceph free space status'
2026-02-16 04:49:22.603589 | orchestrator | # Ceph free space status
2026-02-16 04:49:22.603600 | orchestrator |
2026-02-16 04:49:22.603611 | orchestrator | + echo
2026-02-16 04:49:22.603622 | orchestrator | + ceph df
2026-02-16 04:49:23.198229 | orchestrator | --- RAW STORAGE ---
2026-02-16 04:49:23.198324 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2026-02-16 04:49:23.198349 | orchestrator | hdd 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.88
2026-02-16 04:49:23.198371 | orchestrator | TOTAL 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.88
2026-02-16 04:49:23.198381 | orchestrator |
2026-02-16 04:49:23.198391 | orchestrator | --- POOLS ---
2026-02-16 04:49:23.198400 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2026-02-16 04:49:23.198410 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2026-02-16 04:49:23.198419 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2026-02-16 04:49:23.198428 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2026-02-16 04:49:23.198436 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2026-02-16 04:49:23.198445 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2026-02-16 04:49:23.198455 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2026-02-16 04:49:23.198464 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB
2026-02-16 04:49:23.198472 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2026-02-16 04:49:23.198481 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB
2026-02-16 04:49:23.198489 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2026-02-16 04:49:23.198498 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2026-02-16 04:49:23.198507 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.95 35 GiB
2026-02-16 04:49:23.198515 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2026-02-16 04:49:23.198524 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2026-02-16 04:49:23.244452 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-16 04:49:23.300503 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-16 04:49:23.300599 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2026-02-16 04:49:23.300614 | orchestrator | + osism apply facts
2026-02-16 04:49:25.397658 | orchestrator | 2026-02-16 04:49:25 | INFO  | Task 46873efa-a43a-4d21-bba5-37bff4ccb879 (facts) was prepared for execution.
2026-02-16 04:49:25.397757 | orchestrator | 2026-02-16 04:49:25 | INFO  | It takes a moment until task 46873efa-a43a-4d21-bba5-37bff4ccb879 (facts) has been started and output is visible here.
2026-02-16 04:49:39.951908 | orchestrator |
2026-02-16 04:49:39.952117 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-16 04:49:39.952146 | orchestrator |
2026-02-16 04:49:39.952167 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-16 04:49:39.952181 | orchestrator | Monday 16 February 2026 04:49:29 +0000 (0:00:00.273) 0:00:00.273 *******
2026-02-16 04:49:39.952192 | orchestrator | ok: [testbed-manager]
2026-02-16 04:49:39.952204 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:49:39.952215 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:49:39.952225 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:49:39.952236 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:49:39.952247 | orchestrator | ok: [testbed-node-4]
2026-02-16 04:49:39.952257 | orchestrator | ok: [testbed-node-5]
2026-02-16 04:49:39.952268 | orchestrator |
2026-02-16 04:49:39.952279 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-16 04:49:39.952317 | orchestrator | Monday 16 February 2026 04:49:31 +0000 (0:00:01.170) 0:00:01.444 *******
2026-02-16 04:49:39.952328 | orchestrator | skipping: [testbed-manager]
2026-02-16 04:49:39.952340 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:49:39.952351 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:49:39.952361 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:49:39.952372 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:49:39.952382 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:49:39.952393 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:49:39.952403 | orchestrator |
2026-02-16 04:49:39.952414 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-16 04:49:39.952425 | orchestrator |
2026-02-16 04:49:39.952436 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-16 04:49:39.952447 | orchestrator | Monday 16 February 2026 04:49:32 +0000 (0:00:01.396) 0:00:02.840 *******
2026-02-16 04:49:39.952460 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:49:39.952479 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:49:39.952494 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:49:39.952507 | orchestrator | ok: [testbed-manager]
2026-02-16 04:49:39.952520 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:49:39.952532 | orchestrator | ok: [testbed-node-5]
2026-02-16 04:49:39.952545 | orchestrator | ok: [testbed-node-4]
2026-02-16 04:49:39.952555 | orchestrator |
2026-02-16 04:49:39.952566 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-16 04:49:39.952584 | orchestrator |
2026-02-16 04:49:39.952597 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-16 04:49:39.952608 | orchestrator | Monday 16 February 2026 04:49:38 +0000 (0:00:06.546) 0:00:09.387 *******
2026-02-16 04:49:39.952619 | orchestrator | skipping: [testbed-manager]
2026-02-16 04:49:39.952629 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:49:39.952640 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:49:39.952651 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:49:39.952661 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:49:39.952672 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:49:39.952683 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:49:39.952693 | orchestrator |
2026-02-16 04:49:39.952704 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 04:49:39.952715 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 04:49:39.952728 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 04:49:39.952739 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 04:49:39.952765 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 04:49:39.952776 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 04:49:39.952787 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 04:49:39.952797 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 04:49:39.952808 | orchestrator |
2026-02-16 04:49:39.952819 | orchestrator |
2026-02-16 04:49:39.952830 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 04:49:39.952840 | orchestrator | Monday 16 February 2026 04:49:39 +0000 (0:00:00.557) 0:00:09.945 *******
2026-02-16 04:49:39.952851 | orchestrator | ===============================================================================
2026-02-16 04:49:39.952862 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.55s
2026-02-16 04:49:39.952880 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.40s
2026-02-16 04:49:39.952891 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.17s
2026-02-16 04:49:39.952902 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s
2026-02-16 04:49:40.273406 | orchestrator | + osism validate ceph-mons
2026-02-16 04:50:13.734813 | orchestrator |
2026-02-16 04:50:13.734897 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-02-16 04:50:13.734907 | orchestrator |
2026-02-16 04:50:13.734913 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-02-16 04:50:13.734919 | orchestrator | Monday 16 February 2026 04:49:57 +0000 (0:00:00.447) 0:00:00.447 *******
2026-02-16 04:50:13.734924 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-16 04:50:13.734929 | orchestrator |
2026-02-16 04:50:13.734952 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-16 04:50:13.734957 | orchestrator | Monday 16 February 2026 04:49:58 +0000 (0:00:01.859) 0:00:02.306 *******
2026-02-16 04:50:13.734962 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-16 04:50:13.734967 | orchestrator |
2026-02-16 04:50:13.734972 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-16 04:50:13.734977 | orchestrator | Monday 16 February 2026 04:49:59 +0000 (0:00:00.938) 0:00:03.245 *******
2026-02-16 04:50:13.734982 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:50:13.734987 | orchestrator |
2026-02-16 04:50:13.734992 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-02-16 04:50:13.734997 | orchestrator | Monday 16 February 2026 04:49:59 +0000 (0:00:00.150) 0:00:03.396 *******
2026-02-16 04:50:13.735002 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:50:13.735006 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:50:13.735011 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:50:13.735015 | orchestrator |
2026-02-16 04:50:13.735020 | orchestrator | TASK [Get container info] ******************************************************
2026-02-16 04:50:13.735025 | orchestrator | Monday 16 February 2026 04:50:00 +0000 (0:00:00.342) 0:00:03.739 *******
2026-02-16 04:50:13.735029 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:50:13.735034 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:50:13.735038 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:50:13.735043 | orchestrator |
2026-02-16 04:50:13.735048 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-02-16 04:50:13.735052 | orchestrator | Monday 16 February 2026 04:50:01 +0000 (0:00:01.013) 0:00:04.753 *******
2026-02-16 04:50:13.735057 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:50:13.735062 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:50:13.735067 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:50:13.735071 | orchestrator |
2026-02-16 04:50:13.735076 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-02-16 04:50:13.735081 | orchestrator | Monday 16 February 2026 04:50:01 +0000 (0:00:00.299) 0:00:05.052 *******
2026-02-16 04:50:13.735085 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:50:13.735090 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:50:13.735094 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:50:13.735099 | orchestrator |
2026-02-16 04:50:13.735103 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-16 04:50:13.735108 | orchestrator | Monday 16 February 2026 04:50:02 +0000 (0:00:00.307) 0:00:05.573 *******
2026-02-16 04:50:13.735113 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:50:13.735117 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:50:13.735122 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:50:13.735126 | orchestrator |
2026-02-16 04:50:13.735131 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-02-16 04:50:13.735136 | orchestrator | Monday 16 February 2026 04:50:02 +0000 (0:00:00.297) 0:00:05.881 *******
2026-02-16 04:50:13.735140 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:50:13.735159 | orchestrator | skipping: [testbed-node-1]
2026-02-16 04:50:13.735164 | orchestrator | skipping: [testbed-node-2]
2026-02-16 04:50:13.735169 | orchestrator |
2026-02-16 04:50:13.735173 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-02-16 04:50:13.735178 | orchestrator | Monday 16 February 2026 04:50:02 +0000 (0:00:00.297) 0:00:06.178 *******
2026-02-16 04:50:13.735182 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:50:13.735187 | orchestrator | ok: [testbed-node-1]
2026-02-16 04:50:13.735191 | orchestrator | ok: [testbed-node-2]
2026-02-16 04:50:13.735197 | orchestrator |
2026-02-16 04:50:13.735201 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-16 04:50:13.735206 | orchestrator | Monday 16 February 2026 04:50:03 +0000 (0:00:00.501) 0:00:06.680 *******
2026-02-16 04:50:13.735210 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:50:13.735215 | orchestrator |
2026-02-16 04:50:13.735219 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-16 04:50:13.735224 | orchestrator | Monday 16 February 2026 04:50:03 +0000 (0:00:00.240) 0:00:06.920 *******
2026-02-16 04:50:13.735229 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:50:13.735233 | orchestrator |
2026-02-16 04:50:13.735238 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-16 04:50:13.735242 | orchestrator | Monday 16 February 2026 04:50:03 +0000 (0:00:00.267) 0:00:07.187 *******
2026-02-16 04:50:13.735247 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:50:13.735251 | orchestrator |
2026-02-16 04:50:13.735256 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-16 04:50:13.735260 | orchestrator | Monday 16 February 2026 04:50:04 +0000 (0:00:00.246) 0:00:07.434 *******
2026-02-16 04:50:13.735265 | orchestrator |
2026-02-16 04:50:13.735270 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-16 04:50:13.735274 | orchestrator | Monday 16 February 2026 04:50:04 +0000 (0:00:00.070) 0:00:07.504 *******
2026-02-16 04:50:13.735279 | orchestrator |
2026-02-16 04:50:13.735283 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-16 04:50:13.735288 | orchestrator | Monday 16 February 2026 04:50:04 +0000 (0:00:00.070) 0:00:07.575 *******
2026-02-16 04:50:13.735292 | orchestrator |
2026-02-16 04:50:13.735297 | orchestrator | TASK [Print report file information] *******************************************
2026-02-16 04:50:13.735301 | orchestrator | Monday 16 February 2026 04:50:04 +0000 (0:00:00.075) 0:00:07.651 *******
2026-02-16 04:50:13.735305 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:50:13.735310 | orchestrator |
2026-02-16 04:50:13.735314 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-02-16 04:50:13.735327 | orchestrator | Monday 16 February 2026 04:50:04 +0000 (0:00:00.271) 0:00:07.923 *******
2026-02-16 04:50:13.735332 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:50:13.735336 | orchestrator |
2026-02-16 04:50:13.735352 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-02-16 04:50:13.735357 | orchestrator | Monday 16 February 2026 04:50:04 +0000 (0:00:00.241) 0:00:08.164 *******
2026-02-16 04:50:13.735362 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:50:13.735368 | orchestrator |
2026-02-16 04:50:13.735373 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-02-16 04:50:13.735378 | orchestrator | Monday 16 February 2026 04:50:04 +0000 (0:00:00.128) 0:00:08.292 *******
2026-02-16 04:50:13.735383 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:50:13.735392 | orchestrator |
2026-02-16 04:50:13.735397 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-02-16 04:50:13.735402 | orchestrator | Monday 16 February 2026 04:50:06 +0000 (0:00:01.687) 0:00:09.980 *******
2026-02-16 04:50:13.735407 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:50:13.735412 | orchestrator |
2026-02-16 04:50:13.735417 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-02-16 04:50:13.735423 | orchestrator | Monday 16 February 2026 04:50:07 +0000 (0:00:00.501) 0:00:10.481 *******
2026-02-16 04:50:13.735432 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:50:13.735437 | orchestrator |
2026-02-16 04:50:13.735443 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-02-16 04:50:13.735448 | orchestrator | Monday 16 February 2026 04:50:07 +0000 (0:00:00.124) 0:00:10.606 *******
2026-02-16 04:50:13.735453 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:50:13.735458 | orchestrator |
2026-02-16 04:50:13.735463 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-02-16 04:50:13.735469 | orchestrator | Monday 16 February 2026 04:50:07 +0000 (0:00:00.326) 0:00:10.933 *******
2026-02-16 04:50:13.735474 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:50:13.735479 | orchestrator |
2026-02-16 04:50:13.735484 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-02-16 04:50:13.735489 | orchestrator | Monday 16 February 2026 04:50:07 +0000 (0:00:00.311) 0:00:11.244 *******
2026-02-16 04:50:13.735494 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:50:13.735499 | orchestrator |
2026-02-16 04:50:13.735504 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-02-16 04:50:13.735509 | orchestrator | Monday 16 February 2026 04:50:07 +0000 (0:00:00.108) 0:00:11.353 *******
2026-02-16 04:50:13.735514 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:50:13.735519 | orchestrator |
2026-02-16 04:50:13.735524 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-02-16 04:50:13.735530 | orchestrator | Monday 16 February 2026 04:50:08 +0000 (0:00:00.133) 0:00:11.486 *******
2026-02-16 04:50:13.735535 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:50:13.735540 | orchestrator |
2026-02-16 04:50:13.735545 | orchestrator | TASK [Gather status data] ******************************************************
2026-02-16 04:50:13.735550 | orchestrator | Monday 16 February 2026 04:50:08 +0000 (0:00:00.140) 0:00:11.627 *******
2026-02-16 04:50:13.735556 | orchestrator | changed: [testbed-node-0]
2026-02-16 04:50:13.735561 | orchestrator |
2026-02-16 04:50:13.735566 | orchestrator | TASK [Set health test data] ****************************************************
2026-02-16 04:50:13.735571 | orchestrator | Monday 16 February 2026 04:50:09 +0000 (0:00:01.365) 0:00:12.993 *******
2026-02-16 04:50:13.735576 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:50:13.735581 | orchestrator |
2026-02-16 04:50:13.735586 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-02-16 04:50:13.735591 | orchestrator | Monday 16 February 2026 04:50:09 +0000 (0:00:00.305) 0:00:13.299 *******
2026-02-16 04:50:13.735597 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:50:13.735602 | orchestrator |
2026-02-16 04:50:13.735607 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-02-16 04:50:13.735612 | orchestrator | Monday 16 February 2026 04:50:10 +0000 (0:00:00.155) 0:00:13.455 *******
2026-02-16 04:50:13.735618 | orchestrator | ok: [testbed-node-0]
2026-02-16 04:50:13.735623 | orchestrator |
2026-02-16 04:50:13.735628 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-02-16 04:50:13.735633 | orchestrator | Monday 16 February 2026 04:50:10 +0000 (0:00:00.152) 0:00:13.607 *******
2026-02-16 04:50:13.735638 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:50:13.735643 | orchestrator |
2026-02-16 04:50:13.735648 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-02-16 04:50:13.735653 | orchestrator | Monday 16 February 2026 04:50:10 +0000 (0:00:00.136) 0:00:13.744 *******
2026-02-16 04:50:13.735662 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:50:13.735668 | orchestrator |
2026-02-16 04:50:13.735673 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-16 04:50:13.735678 | orchestrator | Monday 16 February 2026 04:50:10 +0000 (0:00:00.350) 0:00:14.094 *******
2026-02-16 04:50:13.735683 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-16 04:50:13.735689 | orchestrator |
2026-02-16 04:50:13.735694 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-16 04:50:13.735699 | orchestrator | Monday 16 February 2026 04:50:10 +0000 (0:00:00.254) 0:00:14.349 *******
2026-02-16 04:50:13.735708 | orchestrator | skipping: [testbed-node-0]
2026-02-16 04:50:13.735744 | orchestrator |
2026-02-16 04:50:13.735749 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-16 04:50:13.735754 | orchestrator | Monday 16 February 2026 04:50:11 +0000 (0:00:00.264) 0:00:14.613 *******
2026-02-16 04:50:13.735758 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-16 04:50:13.735763 | orchestrator |
2026-02-16 04:50:13.735770 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-16 04:50:13.735778 | orchestrator | Monday 16 February 2026 04:50:12 +0000 (0:00:01.751) 0:00:16.364 *******
2026-02-16 04:50:13.735786 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-16 04:50:13.735794 | orchestrator |
2026-02-16 04:50:13.735802 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-16 04:50:13.735809 | orchestrator | Monday 16 February 2026 04:50:13 +0000 (0:00:00.264) 0:00:16.627 *******
2026-02-16 04:50:13.735816 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-16 04:50:13.735823 | orchestrator |
2026-02-16 04:50:13.735837 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-16 04:50:16.399442 | orchestrator | Monday 16 February 2026 04:50:13 +0000 (0:00:00.075) 0:00:16.892 *******
2026-02-16 04:50:16.399574 | orchestrator |
2026-02-16 04:50:16.399600 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-16 04:50:16.399622 | orchestrator | Monday 16 February 2026 04:50:13 +0000 (0:00:00.072) 0:00:16.968 *******
2026-02-16 04:50:16.399641 | orchestrator |
2026-02-16 04:50:16.399661 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-16 04:50:16.399673 | orchestrator | Monday 16 February 2026 04:50:13 +0000 (0:00:00.077) 0:00:17.040 *******
2026-02-16 04:50:16.399684 | orchestrator |
2026-02-16 04:50:16.399695 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-16 04:50:16.399706 | orchestrator | Monday 16 February 2026 04:50:13 +0000 (0:00:00.077) 0:00:17.118 *******
2026-02-16 04:50:16.399717 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-16 04:50:16.399727 | orchestrator |
2026-02-16 04:50:16.399738 | orchestrator | TASK [Print report file information] *******************************************
2026-02-16 04:50:16.399749 | orchestrator | Monday 16 February 2026 04:50:15 +0000 (0:00:01.488) 0:00:18.607 *******
2026-02-16 04:50:16.399760 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-02-16 04:50:16.399771 | orchestrator |  "msg": [
2026-02-16
04:50:16.399783 | orchestrator |  "Validator run completed.", 2026-02-16 04:50:16.399795 | orchestrator |  "You can find the report file here:", 2026-02-16 04:50:16.399806 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-02-16T04:49:57+00:00-report.json", 2026-02-16 04:50:16.399818 | orchestrator |  "on the following host:", 2026-02-16 04:50:16.399829 | orchestrator |  "testbed-manager" 2026-02-16 04:50:16.399839 | orchestrator |  ] 2026-02-16 04:50:16.399851 | orchestrator | } 2026-02-16 04:50:16.399862 | orchestrator | 2026-02-16 04:50:16.399873 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 04:50:16.399885 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-02-16 04:50:16.399897 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 04:50:16.399909 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 04:50:16.399919 | orchestrator | 2026-02-16 04:50:16.399962 | orchestrator | 2026-02-16 04:50:16.399977 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 04:50:16.399990 | orchestrator | Monday 16 February 2026 04:50:16 +0000 (0:00:00.862) 0:00:19.470 ******* 2026-02-16 04:50:16.400033 | orchestrator | =============================================================================== 2026-02-16 04:50:16.400046 | orchestrator | Get timestamp for report file ------------------------------------------- 1.86s 2026-02-16 04:50:16.400059 | orchestrator | Aggregate test results step one ----------------------------------------- 1.75s 2026-02-16 04:50:16.400071 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.69s 2026-02-16 04:50:16.400084 | orchestrator | Write report file 
------------------------------------------------------- 1.49s 2026-02-16 04:50:16.400096 | orchestrator | Gather status data ------------------------------------------------------ 1.37s 2026-02-16 04:50:16.400108 | orchestrator | Get container info ------------------------------------------------------ 1.01s 2026-02-16 04:50:16.400121 | orchestrator | Create report output directory ------------------------------------------ 0.94s 2026-02-16 04:50:16.400133 | orchestrator | Print report file information ------------------------------------------- 0.86s 2026-02-16 04:50:16.400145 | orchestrator | Set test result to passed if container is existing ---------------------- 0.52s 2026-02-16 04:50:16.400158 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.50s 2026-02-16 04:50:16.400187 | orchestrator | Set quorum test data ---------------------------------------------------- 0.50s 2026-02-16 04:50:16.400199 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.35s 2026-02-16 04:50:16.400212 | orchestrator | Prepare test data for container existance test -------------------------- 0.34s 2026-02-16 04:50:16.400224 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s 2026-02-16 04:50:16.400236 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.31s 2026-02-16 04:50:16.400248 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2026-02-16 04:50:16.400260 | orchestrator | Set health test data ---------------------------------------------------- 0.31s 2026-02-16 04:50:16.400273 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2026-02-16 04:50:16.400285 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s 2026-02-16 04:50:16.400297 | orchestrator | Print report file information 
------------------------------------------- 0.27s 2026-02-16 04:50:16.754395 | orchestrator | + osism validate ceph-mgrs 2026-02-16 04:50:47.978441 | orchestrator | 2026-02-16 04:50:47.978554 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-02-16 04:50:47.978571 | orchestrator | 2026-02-16 04:50:47.978584 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-02-16 04:50:47.978596 | orchestrator | Monday 16 February 2026 04:50:33 +0000 (0:00:00.434) 0:00:00.434 ******* 2026-02-16 04:50:47.978608 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-16 04:50:47.978619 | orchestrator | 2026-02-16 04:50:47.978630 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-16 04:50:47.978641 | orchestrator | Monday 16 February 2026 04:50:34 +0000 (0:00:00.825) 0:00:01.259 ******* 2026-02-16 04:50:47.978652 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-16 04:50:47.978663 | orchestrator | 2026-02-16 04:50:47.978713 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-16 04:50:47.978724 | orchestrator | Monday 16 February 2026 04:50:35 +0000 (0:00:00.997) 0:00:02.256 ******* 2026-02-16 04:50:47.978735 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:50:47.978747 | orchestrator | 2026-02-16 04:50:47.978758 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-02-16 04:50:47.978769 | orchestrator | Monday 16 February 2026 04:50:35 +0000 (0:00:00.142) 0:00:02.399 ******* 2026-02-16 04:50:47.978780 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:50:47.978791 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:50:47.978802 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:50:47.978813 | orchestrator | 2026-02-16 04:50:47.978824 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-02-16 04:50:47.978835 | orchestrator | Monday 16 February 2026 04:50:35 +0000 (0:00:00.315) 0:00:02.715 ******* 2026-02-16 04:50:47.978869 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:50:47.978881 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:50:47.978891 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:50:47.978902 | orchestrator | 2026-02-16 04:50:47.978947 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-16 04:50:47.978959 | orchestrator | Monday 16 February 2026 04:50:36 +0000 (0:00:01.016) 0:00:03.731 ******* 2026-02-16 04:50:47.978971 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:50:47.978985 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:50:47.978998 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:50:47.979010 | orchestrator | 2026-02-16 04:50:47.979023 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-16 04:50:47.979036 | orchestrator | Monday 16 February 2026 04:50:37 +0000 (0:00:00.325) 0:00:04.056 ******* 2026-02-16 04:50:47.979049 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:50:47.979061 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:50:47.979074 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:50:47.979087 | orchestrator | 2026-02-16 04:50:47.979099 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-16 04:50:47.979113 | orchestrator | Monday 16 February 2026 04:50:37 +0000 (0:00:00.502) 0:00:04.559 ******* 2026-02-16 04:50:47.979125 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:50:47.979137 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:50:47.979149 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:50:47.979161 | orchestrator | 2026-02-16 04:50:47.979174 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-02-16 04:50:47.979186 | orchestrator | Monday 16 February 2026 04:50:37 +0000 (0:00:00.309) 0:00:04.868 ******* 2026-02-16 04:50:47.979198 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:50:47.979210 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:50:47.979223 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:50:47.979236 | orchestrator | 2026-02-16 04:50:47.979248 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-02-16 04:50:47.979260 | orchestrator | Monday 16 February 2026 04:50:38 +0000 (0:00:00.300) 0:00:05.169 ******* 2026-02-16 04:50:47.979272 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:50:47.979285 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:50:47.979297 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:50:47.979309 | orchestrator | 2026-02-16 04:50:47.979322 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-16 04:50:47.979334 | orchestrator | Monday 16 February 2026 04:50:38 +0000 (0:00:00.494) 0:00:05.663 ******* 2026-02-16 04:50:47.979347 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:50:47.979359 | orchestrator | 2026-02-16 04:50:47.979369 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-16 04:50:47.979380 | orchestrator | Monday 16 February 2026 04:50:38 +0000 (0:00:00.243) 0:00:05.907 ******* 2026-02-16 04:50:47.979391 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:50:47.979402 | orchestrator | 2026-02-16 04:50:47.979413 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-16 04:50:47.979424 | orchestrator | Monday 16 February 2026 04:50:39 +0000 (0:00:00.273) 0:00:06.180 ******* 2026-02-16 04:50:47.979448 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:50:47.979469 | orchestrator | 2026-02-16 04:50:47.979487 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-02-16 04:50:47.979508 | orchestrator | Monday 16 February 2026 04:50:39 +0000 (0:00:00.250) 0:00:06.431 ******* 2026-02-16 04:50:47.979526 | orchestrator | 2026-02-16 04:50:47.979542 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-16 04:50:47.979559 | orchestrator | Monday 16 February 2026 04:50:39 +0000 (0:00:00.072) 0:00:06.503 ******* 2026-02-16 04:50:47.979576 | orchestrator | 2026-02-16 04:50:47.979593 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-16 04:50:47.979611 | orchestrator | Monday 16 February 2026 04:50:39 +0000 (0:00:00.071) 0:00:06.575 ******* 2026-02-16 04:50:47.979641 | orchestrator | 2026-02-16 04:50:47.979660 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-16 04:50:47.979678 | orchestrator | Monday 16 February 2026 04:50:39 +0000 (0:00:00.076) 0:00:06.652 ******* 2026-02-16 04:50:47.979695 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:50:47.979714 | orchestrator | 2026-02-16 04:50:47.979731 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-02-16 04:50:47.979749 | orchestrator | Monday 16 February 2026 04:50:39 +0000 (0:00:00.238) 0:00:06.890 ******* 2026-02-16 04:50:47.979768 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:50:47.979786 | orchestrator | 2026-02-16 04:50:47.979832 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-02-16 04:50:47.979854 | orchestrator | Monday 16 February 2026 04:50:40 +0000 (0:00:00.250) 0:00:07.141 ******* 2026-02-16 04:50:47.979874 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:50:47.979886 | orchestrator | 2026-02-16 04:50:47.979897 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-02-16 04:50:47.979908 | orchestrator | Monday 16 February 2026 04:50:40 +0000 (0:00:00.115) 0:00:07.257 ******* 2026-02-16 04:50:47.979941 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:50:47.979952 | orchestrator | 2026-02-16 04:50:47.979963 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-02-16 04:50:47.979974 | orchestrator | Monday 16 February 2026 04:50:42 +0000 (0:00:02.049) 0:00:09.306 ******* 2026-02-16 04:50:47.979985 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:50:47.979996 | orchestrator | 2026-02-16 04:50:47.980025 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-02-16 04:50:47.980037 | orchestrator | Monday 16 February 2026 04:50:42 +0000 (0:00:00.436) 0:00:09.742 ******* 2026-02-16 04:50:47.980048 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:50:47.980059 | orchestrator | 2026-02-16 04:50:47.980070 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-02-16 04:50:47.980080 | orchestrator | Monday 16 February 2026 04:50:43 +0000 (0:00:00.317) 0:00:10.060 ******* 2026-02-16 04:50:47.980091 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:50:47.980102 | orchestrator | 2026-02-16 04:50:47.980113 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-02-16 04:50:47.980124 | orchestrator | Monday 16 February 2026 04:50:43 +0000 (0:00:00.144) 0:00:10.205 ******* 2026-02-16 04:50:47.980134 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:50:47.980145 | orchestrator | 2026-02-16 04:50:47.980156 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-16 04:50:47.980166 | orchestrator | Monday 16 February 2026 04:50:43 +0000 (0:00:00.137) 0:00:10.342 ******* 2026-02-16 04:50:47.980177 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-16 
04:50:47.980188 | orchestrator | 2026-02-16 04:50:47.980198 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-16 04:50:47.980209 | orchestrator | Monday 16 February 2026 04:50:43 +0000 (0:00:00.258) 0:00:10.601 ******* 2026-02-16 04:50:47.980220 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:50:47.980230 | orchestrator | 2026-02-16 04:50:47.980241 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-16 04:50:47.980252 | orchestrator | Monday 16 February 2026 04:50:43 +0000 (0:00:00.275) 0:00:10.876 ******* 2026-02-16 04:50:47.980263 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-16 04:50:47.980274 | orchestrator | 2026-02-16 04:50:47.980284 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-16 04:50:47.980295 | orchestrator | Monday 16 February 2026 04:50:45 +0000 (0:00:01.281) 0:00:12.157 ******* 2026-02-16 04:50:47.980306 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-16 04:50:47.980317 | orchestrator | 2026-02-16 04:50:47.980328 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-16 04:50:47.980338 | orchestrator | Monday 16 February 2026 04:50:45 +0000 (0:00:00.276) 0:00:12.434 ******* 2026-02-16 04:50:47.980359 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-16 04:50:47.980370 | orchestrator | 2026-02-16 04:50:47.980380 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-16 04:50:47.980391 | orchestrator | Monday 16 February 2026 04:50:45 +0000 (0:00:00.283) 0:00:12.718 ******* 2026-02-16 04:50:47.980402 | orchestrator | 2026-02-16 04:50:47.980413 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-16 04:50:47.980423 | orchestrator 
| Monday 16 February 2026 04:50:45 +0000 (0:00:00.074) 0:00:12.792 ******* 2026-02-16 04:50:47.980434 | orchestrator | 2026-02-16 04:50:47.980445 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-16 04:50:47.980455 | orchestrator | Monday 16 February 2026 04:50:45 +0000 (0:00:00.071) 0:00:12.863 ******* 2026-02-16 04:50:47.980466 | orchestrator | 2026-02-16 04:50:47.980477 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-16 04:50:47.980487 | orchestrator | Monday 16 February 2026 04:50:46 +0000 (0:00:00.272) 0:00:13.136 ******* 2026-02-16 04:50:47.980498 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-16 04:50:47.980509 | orchestrator | 2026-02-16 04:50:47.980520 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-16 04:50:47.980531 | orchestrator | Monday 16 February 2026 04:50:47 +0000 (0:00:01.343) 0:00:14.480 ******* 2026-02-16 04:50:47.980541 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-02-16 04:50:47.980552 | orchestrator |  "msg": [ 2026-02-16 04:50:47.980563 | orchestrator |  "Validator run completed.", 2026-02-16 04:50:47.980579 | orchestrator |  "You can find the report file here:", 2026-02-16 04:50:47.980590 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-02-16T04:50:34+00:00-report.json", 2026-02-16 04:50:47.980602 | orchestrator |  "on the following host:", 2026-02-16 04:50:47.980613 | orchestrator |  "testbed-manager" 2026-02-16 04:50:47.980624 | orchestrator |  ] 2026-02-16 04:50:47.980635 | orchestrator | } 2026-02-16 04:50:47.980646 | orchestrator | 2026-02-16 04:50:47.980657 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 04:50:47.980669 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-02-16 04:50:47.980681 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 04:50:47.980702 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 04:50:48.320035 | orchestrator | 2026-02-16 04:50:48.320119 | orchestrator | 2026-02-16 04:50:48.320129 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 04:50:48.320139 | orchestrator | Monday 16 February 2026 04:50:47 +0000 (0:00:00.405) 0:00:14.886 ******* 2026-02-16 04:50:48.320147 | orchestrator | =============================================================================== 2026-02-16 04:50:48.320154 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.05s 2026-02-16 04:50:48.320162 | orchestrator | Write report file ------------------------------------------------------- 1.34s 2026-02-16 04:50:48.320169 | orchestrator | Aggregate test results step one ----------------------------------------- 1.28s 2026-02-16 04:50:48.320176 | orchestrator | Get container info ------------------------------------------------------ 1.02s 2026-02-16 04:50:48.320184 | orchestrator | Create report output directory ------------------------------------------ 1.00s 2026-02-16 04:50:48.320191 | orchestrator | Get timestamp for report file ------------------------------------------- 0.83s 2026-02-16 04:50:48.320198 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s 2026-02-16 04:50:48.320205 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.49s 2026-02-16 04:50:48.320234 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.44s 2026-02-16 04:50:48.320242 | orchestrator | Flush handlers ---------------------------------------------------------- 0.42s 2026-02-16 04:50:48.320249 | 
orchestrator | Print report file information ------------------------------------------- 0.41s 2026-02-16 04:50:48.320256 | orchestrator | Set test result to failed if container is missing ----------------------- 0.33s 2026-02-16 04:50:48.320263 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.32s 2026-02-16 04:50:48.320271 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s 2026-02-16 04:50:48.320278 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2026-02-16 04:50:48.320285 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.30s 2026-02-16 04:50:48.320292 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s 2026-02-16 04:50:48.320299 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s 2026-02-16 04:50:48.320306 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.28s 2026-02-16 04:50:48.320314 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s 2026-02-16 04:50:48.709198 | orchestrator | + osism validate ceph-osds 2026-02-16 04:51:10.989569 | orchestrator | 2026-02-16 04:51:10.989671 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-02-16 04:51:10.989689 | orchestrator | 2026-02-16 04:51:10.989723 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-02-16 04:51:10.989738 | orchestrator | Monday 16 February 2026 04:51:05 +0000 (0:00:00.466) 0:00:00.466 ******* 2026-02-16 04:51:10.989752 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-16 04:51:10.989765 | orchestrator | 2026-02-16 04:51:10.989779 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-02-16 04:51:10.989792 | orchestrator | Monday 16 February 2026 04:51:07 +0000 (0:00:01.840) 0:00:02.306 ******* 2026-02-16 04:51:10.989806 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-16 04:51:10.989819 | orchestrator | 2026-02-16 04:51:10.989832 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-16 04:51:10.989840 | orchestrator | Monday 16 February 2026 04:51:07 +0000 (0:00:00.507) 0:00:02.814 ******* 2026-02-16 04:51:10.989848 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-16 04:51:10.989856 | orchestrator | 2026-02-16 04:51:10.989864 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-16 04:51:10.989872 | orchestrator | Monday 16 February 2026 04:51:08 +0000 (0:00:00.731) 0:00:03.545 ******* 2026-02-16 04:51:10.989880 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:51:10.989890 | orchestrator | 2026-02-16 04:51:10.989898 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-02-16 04:51:10.989958 | orchestrator | Monday 16 February 2026 04:51:08 +0000 (0:00:00.132) 0:00:03.678 ******* 2026-02-16 04:51:10.989966 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:51:10.989974 | orchestrator | 2026-02-16 04:51:10.989982 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-02-16 04:51:10.989990 | orchestrator | Monday 16 February 2026 04:51:08 +0000 (0:00:00.136) 0:00:03.815 ******* 2026-02-16 04:51:10.989998 | orchestrator | skipping: [testbed-node-3] 2026-02-16 04:51:10.990006 | orchestrator | skipping: [testbed-node-4] 2026-02-16 04:51:10.990078 | orchestrator | skipping: [testbed-node-5] 2026-02-16 04:51:10.990113 | orchestrator | 2026-02-16 04:51:10.990129 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-02-16 04:51:10.990143 | orchestrator | Monday 16 February 2026 04:51:09 +0000 (0:00:00.302) 0:00:04.118 ******* 2026-02-16 04:51:10.990156 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:51:10.990168 | orchestrator | 2026-02-16 04:51:10.990181 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-02-16 04:51:10.990231 | orchestrator | Monday 16 February 2026 04:51:09 +0000 (0:00:00.141) 0:00:04.259 ******* 2026-02-16 04:51:10.990244 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:51:10.990257 | orchestrator | ok: [testbed-node-4] 2026-02-16 04:51:10.990269 | orchestrator | ok: [testbed-node-5] 2026-02-16 04:51:10.990281 | orchestrator | 2026-02-16 04:51:10.990293 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-02-16 04:51:10.990306 | orchestrator | Monday 16 February 2026 04:51:09 +0000 (0:00:00.327) 0:00:04.587 ******* 2026-02-16 04:51:10.990318 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:51:10.990330 | orchestrator | 2026-02-16 04:51:10.990343 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-16 04:51:10.990355 | orchestrator | Monday 16 February 2026 04:51:10 +0000 (0:00:00.800) 0:00:05.388 ******* 2026-02-16 04:51:10.990367 | orchestrator | ok: [testbed-node-3] 2026-02-16 04:51:10.990379 | orchestrator | ok: [testbed-node-4] 2026-02-16 04:51:10.990391 | orchestrator | ok: [testbed-node-5] 2026-02-16 04:51:10.990404 | orchestrator | 2026-02-16 04:51:10.990416 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-02-16 04:51:10.990429 | orchestrator | Monday 16 February 2026 04:51:10 +0000 (0:00:00.310) 0:00:05.699 ******* 2026-02-16 04:51:10.990444 | orchestrator | skipping: [testbed-node-3] => (item={'id': '98e59dc60cf278ad80221d381b37c91018fe9ba03fa316e3284fe505af170b58', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-02-16 04:51:10.990460 | orchestrator | skipping: [testbed-node-3] => (item={'id': '26460d4718afd277b78c3bb8d3c4a5b8a10e6419b18d6ddb7570a1c9a6d61a44', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-16 04:51:10.990474 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7a6f3abc70dc336d2fc1c4c44f26ee6a33c5351a6943ef12b7b1bae98df7a83f', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-02-16 04:51:10.990487 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e8e233bbe1abd6ad7f6e5afa6a6fdc00a75981df80596accb663ac3b07ed607c', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-16 04:51:10.990500 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6e939c4ce3ab400b667b6551a37615e691ff3863577f6953aed3a5491c6e9705', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-02-16 04:51:10.990537 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c1279b50624916d117fd52b55b2752bfd480d5f7a73cbe0fadcfad574ce9008d', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-16 04:51:10.990551 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9d1f3c366542f6d4d480e0e65102d480a68ccfe33c4812af985be8ecf1678c54', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-02-16 04:51:10.990563 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9159a4ab2d6a10a8765bc406be1630daac91efe7171c977f5eb89bc40409f29e', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-02-16 04:51:10.990576 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2835b239e6a73e1fc14b5499d611e471206d225cf6d32fcca5be3bdfa45ccde3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:10.990596 | orchestrator | skipping: [testbed-node-3] => (item={'id': '66e3ba715d79efb7970dcd99fc1ffda464d8cac104f81f67e168de0e65194640', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:10.990610 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fc1219dd7bf94d6861df3a4c853d6c3cbe57eb859a25caf720a60c5b6504197b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:10.990626 | orchestrator | ok: [testbed-node-3] => (item={'id': 'b79d08dc6f9cd11280f3ccad56311d121b130c60908099f5a8d9ba2cc16a191f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:10.990641 | orchestrator | ok: [testbed-node-3] => (item={'id': 'ae9667373f8b9f27524bce2149a5ba203f97fa47b8cf01ccdeaae210d0cf45e6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:10.990655 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5cf99e78e14e5fc3410ab31f693812d11a8dbbc610fc3ceea8f845c588efb23a', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:10.990668 | orchestrator | skipping: [testbed-node-3] => (item={'id': '269983f5e30b9aa011a780b30a18e2dd4768504980a2ba293ae980ce88b2fec5', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-16 04:51:10.990681 | orchestrator | skipping: [testbed-node-3] => (item={'id': '931b643ee9337950911d80e3d0c3270e819c3ab3e967ec68062a3b73f36bb236', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-16 04:51:10.990694 | orchestrator | skipping: [testbed-node-3] => (item={'id': '168119be40456defaa418f07e93a4297effeb8fde73fbd25035b036942f25b41', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-16 04:51:10.990706 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f651cd14f6d34cf87a31745d61a0edde4828e4fbbd3aa4eb7a92c9a6a814497b', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-16 04:51:10.990719 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8dc4dfc5aebaca879d2270d67936900739e98ab45059ca77b4ab438ae64d1c41', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-16 04:51:10.990731 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e620a17d2bb2dc2e34b7c4553c17e2c93ca4fca45d7acd6fca3e4035445a753b', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-02-16 04:51:10.990752 | orchestrator | skipping: [testbed-node-4] => (item={'id': '069c10cc2c4f7d7d43566eb37c3d43905a84a94b1f8d73eb094b11811c7a01e4', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-16 04:51:11.273507 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e2406c689c02606ceda2fa4396dba000055b9363f9b2a1e0e8ada04395be823b', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-02-16 04:51:11.273653 | orchestrator | skipping: [testbed-node-4] => (item={'id': '29b0d9a6bf4b81c4e4960be4fb3d93623d67cc59a398b254bd3d6d3b7926e495', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-16 04:51:11.273691 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0c9759046e34f5cc0a55546d009341398dc0ba813ca46ec1d6b3070e514f53ba', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-02-16 04:51:11.273706 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3a35000815df2582b92568bb1583fb884e382c826bed6a01f79b508a3de6eb3b', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-16 04:51:11.273723 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fe0a49afc92d1d639317750bf6f7c14d961fb768f7dedc40b37ff8aebb776be3', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-02-16 04:51:11.273735 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bc150f9e3f74ea4acbecf62df9676cf09497ae56e16cc32a93cbf2fb25cc1da4', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-02-16 04:51:11.273747 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7f8bba2432affa07a9c098fa03e685bb4ded07ef8777c88b70556fc413256562', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:11.273760 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0708e7022ecb8229c16a05ec84b27e5a9890bab8a8e449b991256b6bc5adf62b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:11.273773 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ad7ede50e7c7274c848fd24692dbd26577eb148729b218a5c065a9d14594c5b7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:11.273787 | orchestrator | ok: [testbed-node-4] => (item={'id': 'c102a896f7dce1f8d4d2ff83c3f229728703f5df827acb441cb655b259416f60', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:11.273799 | orchestrator | ok: [testbed-node-4] => (item={'id': '7ff9b51ef15d0ae2da864f153fbd6cfff6d82c4c4bdffb019a4173447be72c41', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:11.273811 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4c4d8e019e84964b705a0e0fb0d618e0b5565ba1e03033e0365420ccdca50223', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:11.273823 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f24120e70e82d1b3c9912fa3724ff329b7593f8eb76378d477ca79331895ed01', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-16 04:51:11.273835 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9d50a4850076f5962ad18bb7fc6c723f6de80aa4c32e732219b991b73faa79e1', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-16 04:51:11.273866 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ae8d02f81257185fdb699ab760877ef3dcaf693abcf0bf748720ab2061cecc85', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-16 04:51:11.273886 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ff51d173d115f5b51c7866c4c0334d91ab0d642bba389f2d3f02600c2240fa84', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-16 04:51:11.273898 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bb068d7ad172e3eed9a93abfe8b2a5baa57f6acb914bd7a9f94ecd5f8fd53dfe', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-16 04:51:11.273962 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a0c9eca52cd8408238587ea5f96d05ce732358cf363e58f113ba25e462b0f152', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-02-16 04:51:11.273973 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8ccddfe1e7f2a2ffc5697a35d0cfd525b3147e53d78d766ff30377b6f89d199b', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-16 04:51:11.273989 | orchestrator | skipping: [testbed-node-5] => (item={'id': '20d30476a224111d846f446d16280811a0f0e6cada949b5e1a2758d40ba853b9', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-02-16 04:51:11.274001 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b36bea118278851f91d4e45dce932d0f45dd861d9238883f60d3f5761f71f054', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-16 04:51:11.274068 | orchestrator | skipping: [testbed-node-5] => (item={'id': '185eb15528524e97ce0cca83d263d2573686afddef790120afaa47bf428a21c1', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-02-16 04:51:11.274084 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'aee4b91d354aa02d5791ae767e0a95b3a0180a796b21433b99c17728f949eb68', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-16 04:51:11.274097 | orchestrator | skipping: [testbed-node-5] => (item={'id': '27fce376f49d231f2634766b784220f5d0e382a6df4b4623abf0720307d03234', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-02-16 04:51:11.274110 | orchestrator | skipping: [testbed-node-5] => (item={'id': '58adef27891d4974b746bb99b86bbb4fd28eef8092c8c343860ee2ce2ca1c2b3', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-02-16 04:51:11.274124 | orchestrator | skipping: [testbed-node-5] => (item={'id': '75423e7ba1dd3437094cd3ec44c1aafc133105edd063d8c46f62d04b673f9177', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:11.274137 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2f7c4c40c3151dcf757452f6a8f1c3b6d9e526a0832ce0c42088f1f51de808da', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:11.274150 | orchestrator | skipping: [testbed-node-5] => (item={'id': '858a38890863f2d8b4100b474878bbb46829519726e45081f3394e826eadbf20', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:11.274171 | orchestrator | ok: [testbed-node-5] => (item={'id': '46e74519daa627608089ab5c8b4ae58661225e0db0998839f0503e9d52f70801', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:11.274193 | orchestrator | ok: [testbed-node-5] => (item={'id': '4e1bff216efccf809dd5f619576e504c73e897723886b9b04962129a4d0a1004', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:22.657795 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6bc020a31aff59a7397589b4e943e6f92f59370d29d8b3d10f35d4fc353f5f0d', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-02-16 04:51:22.657941 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b795998c584d1129fa109a236294901215767e170be76319b34a9a8ca439a3e6', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-16 04:51:22.657956 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2126fe7b7714085930b84c815ae858bf20b45c0af0a1dafd415332514d17b930', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-16 04:51:22.657965 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a3f27c97b0b38d8daebb5ad43bf793faf3075aff6ad95cacadb55e4c8bc4e590', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-16 04:51:22.657987 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e1207736d3603be2c4d08d9c0ff6c683322fea6abcbede53984c279387afb5be', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-16 04:51:22.657994 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0ad977ae46964b2a516a6f474b1e74e03245678e91e8012f5c0079c15d5f3832', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-16 04:51:22.658001 | orchestrator |
2026-02-16 04:51:22.658011 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-02-16 04:51:22.658063 | orchestrator | Monday 16 February 2026 04:51:11 +0000 (0:00:00.574) 0:00:06.273 *******
2026-02-16 04:51:22.658070 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:22.658078 | orchestrator | ok: [testbed-node-4]
2026-02-16 04:51:22.658084 | orchestrator | ok: [testbed-node-5]
2026-02-16 04:51:22.658091 | orchestrator |
2026-02-16 04:51:22.658098 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-02-16 04:51:22.658105 | orchestrator | Monday 16 February 2026 04:51:11 +0000 (0:00:00.298) 0:00:06.572 *******
2026-02-16 04:51:22.658112 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:51:22.658119 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:51:22.658126 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:51:22.658133 | orchestrator |
2026-02-16 04:51:22.658140 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-02-16 04:51:22.658147 | orchestrator | Monday 16 February 2026 04:51:12 +0000 (0:00:00.567) 0:00:07.139 *******
2026-02-16 04:51:22.658155 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:22.658162 | orchestrator | ok: [testbed-node-4]
2026-02-16 04:51:22.658170 | orchestrator | ok: [testbed-node-5]
2026-02-16 04:51:22.658176 | orchestrator |
2026-02-16 04:51:22.658183 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-16 04:51:22.658190 | orchestrator | Monday 16 February 2026 04:51:12 +0000 (0:00:00.312) 0:00:07.452 *******
2026-02-16 04:51:22.658197 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:22.658205 | orchestrator | ok: [testbed-node-4]
2026-02-16 04:51:22.658232 | orchestrator | ok: [testbed-node-5]
2026-02-16 04:51:22.658240 | orchestrator |
2026-02-16 04:51:22.658247 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-02-16 04:51:22.658254 | orchestrator | Monday 16 February 2026 04:51:12 +0000 (0:00:00.291) 0:00:07.744 *******
2026-02-16 04:51:22.658260 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-02-16 04:51:22.658269 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-02-16 04:51:22.658277 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:51:22.658284 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-02-16 04:51:22.658291 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-02-16 04:51:22.658298 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:51:22.658306 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-02-16 04:51:22.658313 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-02-16 04:51:22.658320 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:51:22.658327 | orchestrator |
2026-02-16 04:51:22.658334 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-02-16 04:51:22.658340 | orchestrator | Monday 16 February 2026 04:51:13 +0000 (0:00:00.323) 0:00:08.068 *******
2026-02-16 04:51:22.658347 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:22.658353 | orchestrator | ok: [testbed-node-4]
2026-02-16 04:51:22.658360 | orchestrator | ok: [testbed-node-5]
2026-02-16 04:51:22.658366 | orchestrator |
2026-02-16 04:51:22.658373 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-02-16 04:51:22.658380 | orchestrator | Monday 16 February 2026 04:51:13 +0000 (0:00:00.508) 0:00:08.576 *******
2026-02-16 04:51:22.658395 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:51:22.658420 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:51:22.658428 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:51:22.658435 | orchestrator |
2026-02-16 04:51:22.658442 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-02-16 04:51:22.658449 | orchestrator | Monday 16 February 2026 04:51:13 +0000 (0:00:00.288) 0:00:08.865 *******
2026-02-16 04:51:22.658454 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:51:22.658461 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:51:22.658467 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:51:22.658473 | orchestrator |
2026-02-16 04:51:22.658479 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-02-16 04:51:22.658486 | orchestrator | Monday 16 February 2026 04:51:14 +0000 (0:00:00.288) 0:00:09.153 *******
2026-02-16 04:51:22.658492 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:22.658499 | orchestrator | ok: [testbed-node-4]
2026-02-16 04:51:22.658505 | orchestrator | ok: [testbed-node-5]
2026-02-16 04:51:22.658511 | orchestrator |
2026-02-16 04:51:22.658517 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-16 04:51:22.658523 | orchestrator | Monday 16 February 2026 04:51:14 +0000 (0:00:00.328) 0:00:09.481 *******
2026-02-16 04:51:22.658529 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:51:22.658535 | orchestrator |
2026-02-16 04:51:22.658541 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-16 04:51:22.658548 | orchestrator | Monday 16 February 2026 04:51:15 +0000 (0:00:00.688) 0:00:10.170 *******
2026-02-16 04:51:22.658554 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:51:22.658560 | orchestrator |
2026-02-16 04:51:22.658567 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-16 04:51:22.658573 | orchestrator | Monday 16 February 2026 04:51:15 +0000 (0:00:00.264) 0:00:10.434 *******
2026-02-16 04:51:22.658579 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:51:22.658585 | orchestrator |
2026-02-16 04:51:22.658592 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-16 04:51:22.658605 | orchestrator | Monday 16 February 2026 04:51:15 +0000 (0:00:00.083) 0:00:10.718 *******
2026-02-16 04:51:22.658612 | orchestrator |
2026-02-16 04:51:22.658618 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-16 04:51:22.658625 | orchestrator | Monday 16 February 2026 04:51:15 +0000 (0:00:00.086) 0:00:10.802 *******
2026-02-16 04:51:22.658631 | orchestrator |
2026-02-16 04:51:22.658638 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-16 04:51:22.658644 | orchestrator | Monday 16 February 2026 04:51:15 +0000 (0:00:00.070) 0:00:10.888 *******
2026-02-16 04:51:22.658650 | orchestrator |
2026-02-16 04:51:22.658657 | orchestrator | TASK [Print report file information] *******************************************
2026-02-16 04:51:22.658663 | orchestrator | Monday 16 February 2026 04:51:15 +0000 (0:00:00.253) 0:00:10.959 *******
2026-02-16 04:51:22.658670 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:51:22.658675 | orchestrator |
2026-02-16 04:51:22.658682 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-02-16 04:51:22.658689 | orchestrator | Monday 16 February 2026 04:51:16 +0000 (0:00:00.253) 0:00:11.213 *******
2026-02-16 04:51:22.658695 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:51:22.658701 | orchestrator |
2026-02-16 04:51:22.658708 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-16 04:51:22.658714 | orchestrator | Monday 16 February 2026 04:51:16 +0000 (0:00:00.258) 0:00:11.472 *******
2026-02-16 04:51:22.658720 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:22.658726 | orchestrator | ok: [testbed-node-4]
2026-02-16 04:51:22.658733 | orchestrator | ok: [testbed-node-5]
2026-02-16 04:51:22.658739 | orchestrator |
2026-02-16 04:51:22.658745 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-02-16 04:51:22.658751 | orchestrator | Monday 16 February 2026 04:51:16 +0000 (0:00:00.297) 0:00:11.769 *******
2026-02-16 04:51:22.658758 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:22.658764 | orchestrator |
2026-02-16 04:51:22.658771 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-02-16 04:51:22.658778 | orchestrator | Monday 16 February 2026 04:51:17 +0000 (0:00:00.646) 0:00:12.415 *******
2026-02-16 04:51:22.658784 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-16 04:51:22.658792 | orchestrator |
2026-02-16 04:51:22.658798 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-02-16 04:51:22.658804 | orchestrator | Monday 16 February 2026 04:51:19 +0000 (0:00:01.607) 0:00:14.023 *******
2026-02-16 04:51:22.658810 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:22.658816 | orchestrator |
2026-02-16 04:51:22.658823 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-02-16 04:51:22.658830 | orchestrator | Monday 16 February 2026 04:51:19 +0000 (0:00:00.133) 0:00:14.156 *******
2026-02-16 04:51:22.658836 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:22.658843 | orchestrator |
2026-02-16 04:51:22.658847 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-02-16 04:51:22.658851 | orchestrator | Monday 16 February 2026 04:51:19 +0000 (0:00:00.312) 0:00:14.469 *******
2026-02-16 04:51:22.658855 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:51:22.658859 | orchestrator |
2026-02-16 04:51:22.658863 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-02-16 04:51:22.658867 | orchestrator | Monday 16 February 2026 04:51:19 +0000 (0:00:00.136) 0:00:14.606 *******
2026-02-16 04:51:22.658871 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:22.658874 | orchestrator |
2026-02-16 04:51:22.658879 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-16 04:51:22.658883 | orchestrator | Monday 16 February 2026 04:51:19 +0000 (0:00:00.123) 0:00:14.730 *******
2026-02-16 04:51:22.658887 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:22.658914 | orchestrator | ok: [testbed-node-4]
2026-02-16 04:51:22.658921 | orchestrator | ok: [testbed-node-5]
2026-02-16 04:51:22.658935 | orchestrator |
2026-02-16 04:51:22.658942 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-02-16 04:51:22.658949 | orchestrator | Monday 16 February 2026 04:51:20 +0000 (0:00:00.336) 0:00:15.066 *******
2026-02-16 04:51:22.658956 | orchestrator | changed: [testbed-node-3]
2026-02-16 04:51:22.658962 | orchestrator | changed: [testbed-node-5]
2026-02-16 04:51:22.658969 | orchestrator | changed: [testbed-node-4]
2026-02-16 04:51:32.635402 | orchestrator |
2026-02-16 04:51:32.635554 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-02-16 04:51:32.635583 | orchestrator | Monday 16 February 2026 04:51:22 +0000 (0:00:02.590) 0:00:17.656 *******
2026-02-16 04:51:32.635603 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:32.635624 | orchestrator | ok: [testbed-node-4]
2026-02-16 04:51:32.635644 | orchestrator | ok: [testbed-node-5]
2026-02-16 04:51:32.635663 | orchestrator |
2026-02-16 04:51:32.635682 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-02-16 04:51:32.635701 | orchestrator | Monday 16 February 2026 04:51:22 +0000 (0:00:00.318) 0:00:17.975 *******
2026-02-16 04:51:32.635722 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:32.635742 | orchestrator | ok: [testbed-node-4]
2026-02-16 04:51:32.635761 | orchestrator | ok: [testbed-node-5]
2026-02-16 04:51:32.635776 | orchestrator |
2026-02-16 04:51:32.635787 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-02-16 04:51:32.635798 | orchestrator | Monday 16 February 2026 04:51:23 +0000 (0:00:00.497) 0:00:18.473 *******
2026-02-16 04:51:32.635809 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:51:32.635820 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:51:32.635831 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:51:32.635842 | orchestrator |
2026-02-16 04:51:32.635853 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-02-16 04:51:32.635864 | orchestrator | Monday 16 February 2026 04:51:23 +0000 (0:00:00.293) 0:00:18.767 *******
2026-02-16 04:51:32.635875 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:32.635886 | orchestrator | ok: [testbed-node-4]
2026-02-16 04:51:32.635922 | orchestrator | ok: [testbed-node-5]
2026-02-16 04:51:32.635933 | orchestrator |
2026-02-16 04:51:32.635946 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-02-16 04:51:32.635965 | orchestrator | Monday 16 February 2026 04:51:24 +0000 (0:00:00.521) 0:00:19.288 *******
2026-02-16 04:51:32.635979 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:51:32.635991 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:51:32.636003 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:51:32.636016 | orchestrator |
2026-02-16 04:51:32.636029 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-02-16 04:51:32.636042 | orchestrator | Monday 16 February 2026 04:51:24 +0000 (0:00:00.290) 0:00:19.578 *******
2026-02-16 04:51:32.636066 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:51:32.636078 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:51:32.636091 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:51:32.636103 | orchestrator |
2026-02-16 04:51:32.636116 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-16 04:51:32.636129 | orchestrator | Monday 16 February 2026 04:51:24 +0000 (0:00:00.297) 0:00:19.876 *******
2026-02-16 04:51:32.636140 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:32.636153 | orchestrator | ok: [testbed-node-4]
2026-02-16 04:51:32.636165 | orchestrator | ok: [testbed-node-5]
2026-02-16 04:51:32.636178 | orchestrator |
2026-02-16 04:51:32.636191 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-02-16 04:51:32.636203 | orchestrator | Monday 16 February 2026 04:51:25 +0000 (0:00:00.494) 0:00:20.371 *******
2026-02-16 04:51:32.636215 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:32.636228 | orchestrator | ok: [testbed-node-4]
2026-02-16 04:51:32.636240 | orchestrator | ok: [testbed-node-5]
2026-02-16 04:51:32.636252 | orchestrator |
2026-02-16 04:51:32.636264 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-02-16 04:51:32.636302 | orchestrator | Monday 16 February 2026 04:51:26 +0000 (0:00:00.733) 0:00:21.104 *******
2026-02-16 04:51:32.636314 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:32.636325 | orchestrator | ok: [testbed-node-4]
2026-02-16 04:51:32.636335 | orchestrator | ok: [testbed-node-5]
2026-02-16 04:51:32.636346 | orchestrator |
2026-02-16 04:51:32.636357 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-02-16 04:51:32.636368 | orchestrator | Monday 16 February 2026 04:51:26 +0000 (0:00:00.309) 0:00:21.413 *******
2026-02-16 04:51:32.636379 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:51:32.636390 | orchestrator | skipping: [testbed-node-4]
2026-02-16 04:51:32.636401 | orchestrator | skipping: [testbed-node-5]
2026-02-16 04:51:32.636412 | orchestrator |
2026-02-16 04:51:32.636423 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-02-16 04:51:32.636434 | orchestrator | Monday 16 February 2026 04:51:26 +0000 (0:00:00.303) 0:00:21.717 *******
2026-02-16 04:51:32.636445 | orchestrator | ok: [testbed-node-3]
2026-02-16 04:51:32.636456 | orchestrator | ok: [testbed-node-4]
2026-02-16 04:51:32.636466 | orchestrator | ok: [testbed-node-5]
2026-02-16 04:51:32.636477 | orchestrator |
2026-02-16 04:51:32.636488 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-16 04:51:32.636499 | orchestrator | Monday 16 February 2026 04:51:27 +0000 (0:00:00.515) 0:00:22.232 *******
2026-02-16 04:51:32.636510 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-16 04:51:32.636521 | orchestrator |
2026-02-16 04:51:32.636532 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-16 04:51:32.636543 | orchestrator | Monday 16 February 2026 04:51:27 +0000 (0:00:00.257) 0:00:22.490 *******
2026-02-16 04:51:32.636554 | orchestrator | skipping: [testbed-node-3]
2026-02-16 04:51:32.636565 | orchestrator |
2026-02-16 04:51:32.636575 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-16 04:51:32.636586 | orchestrator | Monday 16 February 2026 04:51:27 +0000 (0:00:00.249) 0:00:22.739 *******
2026-02-16 04:51:32.636597 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-16 04:51:32.636608 | orchestrator |
2026-02-16 04:51:32.636619 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-16 04:51:32.636630 | orchestrator | Monday 16 February 2026 04:51:29 +0000 (0:00:01.704) 0:00:24.444 *******
2026-02-16 04:51:32.636641 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-16 04:51:32.636652 | orchestrator |
2026-02-16 04:51:32.636663 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-16 04:51:32.636674 | orchestrator | Monday 16 February 2026 04:51:29 +0000 (0:00:00.274) 0:00:24.719 *******
2026-02-16 04:51:32.636685 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-16 04:51:32.636696 | orchestrator |
2026-02-16 04:51:32.636727 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-16 04:51:32.636738 | orchestrator | Monday 16 February 2026 04:51:29 +0000 (0:00:00.254) 0:00:24.973 *******
2026-02-16 04:51:32.636749 | orchestrator |
2026-02-16 04:51:32.636773 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-16 04:51:32.636784 | orchestrator | Monday 16 February 2026 04:51:30 +0000 (0:00:00.072) 0:00:25.046 *******
2026-02-16 04:51:32.636806 | orchestrator |
2026-02-16 04:51:32.636817 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-16 04:51:32.636827 | orchestrator | Monday 16 February 2026 04:51:30 +0000 (0:00:00.071) 0:00:25.118 *******
2026-02-16 04:51:32.636838 | orchestrator |
2026-02-16 04:51:32.636848 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-16 04:51:32.636859 | orchestrator | Monday 16 February 2026 04:51:30 +0000 (0:00:00.076) 0:00:25.195 *******
2026-02-16 04:51:32.636870 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-16 04:51:32.636880 | orchestrator |
2026-02-16 04:51:32.636941 | orchestrator | TASK [Print report file information] *******************************************
2026-02-16 04:51:32.636962 | orchestrator | Monday 16 February 2026 04:51:31 +0000 (0:00:01.518) 0:00:26.713 *******
2026-02-16 04:51:32.636973 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-02-16 04:51:32.636984 | orchestrator |  "msg": [
2026-02-16 04:51:32.636995 | orchestrator |  "Validator run completed.",
2026-02-16 04:51:32.637006 | orchestrator |  "You can find the report file here:",
2026-02-16 04:51:32.637017 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-02-16T04:51:06+00:00-report.json",
2026-02-16 04:51:32.637047 | orchestrator |  "on the following host:",
2026-02-16 04:51:32.637058 | orchestrator |  "testbed-manager"
2026-02-16 04:51:32.637070 | orchestrator |  ]
2026-02-16 04:51:32.637081 | orchestrator | }
2026-02-16 04:51:32.637092 | orchestrator |
2026-02-16 04:51:32.637103 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 04:51:32.637116 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-16 04:51:32.637128 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-16 04:51:32.637139 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-16 04:51:32.637150 | orchestrator |
2026-02-16 04:51:32.637161 | orchestrator |
2026-02-16 04:51:32.637172 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 04:51:32.637183 | orchestrator | Monday 16 February 2026 04:51:32 +0000 (0:00:00.622) 0:00:27.336 *******
2026-02-16 04:51:32.637194 | orchestrator | ===============================================================================
2026-02-16 04:51:32.637205 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.59s
2026-02-16 04:51:32.637215 | orchestrator | Get timestamp for report file ------------------------------------------- 1.84s
2026-02-16 04:51:32.637226 | orchestrator | Aggregate test results step one ----------------------------------------- 1.70s
2026-02-16 04:51:32.637237 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.61s
2026-02-16 04:51:32.637248 | orchestrator | Write report file ------------------------------------------------------- 1.52s
2026-02-16 04:51:32.637258 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.80s
2026-02-16 04:51:32.637269 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.73s
2026-02-16 04:51:32.637280 | orchestrator | Create report output directory ------------------------------------------ 0.73s
2026-02-16 04:51:32.637291 | orchestrator | Aggregate test results step one ----------------------------------------- 0.69s
2026-02-16 04:51:32.637301 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.65s
2026-02-16 04:51:32.637312 | orchestrator | Print report file information ------------------------------------------- 0.62s
2026-02-16 04:51:32.637323 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.57s
2026-02-16 04:51:32.637334 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.57s
2026-02-16 04:51:32.637344 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.52s
2026-02-16 04:51:32.637355 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.52s
2026-02-16 04:51:32.637366 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.51s
2026-02-16 04:51:32.637377 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.51s
2026-02-16 04:51:32.637387
| orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.50s 2026-02-16 04:51:32.637398 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s 2026-02-16 04:51:32.637409 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s 2026-02-16 04:51:32.935372 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-02-16 04:51:32.941322 | orchestrator | + set -e 2026-02-16 04:51:32.941395 | orchestrator | + source /opt/manager-vars.sh 2026-02-16 04:51:32.941407 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-16 04:51:32.941417 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-16 04:51:32.941427 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-16 04:51:32.941805 | orchestrator | ++ CEPH_VERSION=reef 2026-02-16 04:51:32.941825 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-16 04:51:32.941836 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-16 04:51:32.941845 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-16 04:51:32.941855 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-16 04:51:32.941865 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-16 04:51:32.941875 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-16 04:51:32.941884 | orchestrator | ++ export ARA=false 2026-02-16 04:51:32.941925 | orchestrator | ++ ARA=false 2026-02-16 04:51:32.941935 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-16 04:51:32.941945 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-16 04:51:32.941954 | orchestrator | ++ export TEMPEST=false 2026-02-16 04:51:32.941964 | orchestrator | ++ TEMPEST=false 2026-02-16 04:51:32.941973 | orchestrator | ++ export IS_ZUUL=true 2026-02-16 04:51:32.941983 | orchestrator | ++ IS_ZUUL=true 2026-02-16 04:51:32.941993 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120 2026-02-16 04:51:32.942003 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120 2026-02-16 04:51:32.942012 | orchestrator | ++ export EXTERNAL_API=false 2026-02-16 04:51:32.942066 | orchestrator | ++ EXTERNAL_API=false 2026-02-16 04:51:32.942076 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-16 04:51:32.942086 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-16 04:51:32.942096 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-16 04:51:32.942105 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-16 04:51:32.942115 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-16 04:51:32.942124 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-16 04:51:32.942134 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-02-16 04:51:32.942144 | orchestrator | + source /etc/os-release 2026-02-16 04:51:32.942153 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-02-16 04:51:32.942163 | orchestrator | ++ NAME=Ubuntu 2026-02-16 04:51:32.942173 | orchestrator | ++ VERSION_ID=24.04 2026-02-16 04:51:32.942182 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-02-16 04:51:32.942192 | orchestrator | ++ VERSION_CODENAME=noble 2026-02-16 04:51:32.942202 | orchestrator | ++ ID=ubuntu 2026-02-16 04:51:32.942211 | orchestrator | ++ ID_LIKE=debian 2026-02-16 04:51:32.942221 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-02-16 04:51:32.942230 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-02-16 04:51:32.942240 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-02-16 04:51:32.942250 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-02-16 04:51:32.942261 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-02-16 04:51:32.942270 | orchestrator | ++ LOGO=ubuntu-logo 2026-02-16 04:51:32.942280 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-02-16 04:51:32.942290 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-02-16 
04:51:32.942301 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-16 04:51:32.971730 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-16 04:51:55.874578 | orchestrator | 2026-02-16 04:51:55.874694 | orchestrator | # Status of Elasticsearch 2026-02-16 04:51:55.874711 | orchestrator | 2026-02-16 04:51:55.874722 | orchestrator | + pushd /opt/configuration/contrib 2026-02-16 04:51:55.874734 | orchestrator | + echo 2026-02-16 04:51:55.874744 | orchestrator | + echo '# Status of Elasticsearch' 2026-02-16 04:51:55.874754 | orchestrator | + echo 2026-02-16 04:51:55.874764 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-02-16 04:51:56.049330 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-02-16 04:51:56.049433 | orchestrator | 2026-02-16 04:51:56.049453 | orchestrator | # Status of MariaDB 2026-02-16 04:51:56.049470 | orchestrator | 2026-02-16 04:51:56.049485 | orchestrator | + echo 2026-02-16 04:51:56.049535 | orchestrator | + echo '# Status of MariaDB' 2026-02-16 04:51:56.049551 | orchestrator | + echo 2026-02-16 04:51:56.049579 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-16 04:51:56.088279 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-16 04:51:56.088376 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-16 04:51:56.088392 | orchestrator | + MARIADB_USER=root_shard_0 2026-02-16 04:51:56.088405 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-02-16 04:51:56.152733 
| orchestrator | Reading package lists... 2026-02-16 04:51:56.483425 | orchestrator | Building dependency tree... 2026-02-16 04:51:56.483691 | orchestrator | Reading state information... 2026-02-16 04:51:56.871385 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-02-16 04:51:56.872505 | orchestrator | bc set to manually installed. 2026-02-16 04:51:56.872581 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2026-02-16 04:51:57.604318 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-02-16 04:51:57.605538 | orchestrator | 2026-02-16 04:51:57.605598 | orchestrator | # Status of Prometheus 2026-02-16 04:51:57.605615 | orchestrator | 2026-02-16 04:51:57.605628 | orchestrator | + echo 2026-02-16 04:51:57.605640 | orchestrator | + echo '# Status of Prometheus' 2026-02-16 04:51:57.605652 | orchestrator | + echo 2026-02-16 04:51:57.605664 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-02-16 04:51:57.667516 | orchestrator | Unauthorized 2026-02-16 04:51:57.671332 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-02-16 04:51:57.733142 | orchestrator | Unauthorized 2026-02-16 04:51:57.735282 | orchestrator | 2026-02-16 04:51:57.735324 | orchestrator | # Status of RabbitMQ 2026-02-16 04:51:57.735332 | orchestrator | 2026-02-16 04:51:57.735338 | orchestrator | + echo 2026-02-16 04:51:57.735344 | orchestrator | + echo '# Status of RabbitMQ' 2026-02-16 04:51:57.735350 | orchestrator | + echo 2026-02-16 04:51:57.735970 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-16 04:51:57.795480 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-16 04:51:57.795576 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-16 04:51:57.795592 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-02-16 04:51:58.290462 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node 
OK (3) nb_running_ram_node OK (0) 2026-02-16 04:51:58.299698 | orchestrator | 2026-02-16 04:51:58.299767 | orchestrator | # Status of Redis 2026-02-16 04:51:58.299777 | orchestrator | 2026-02-16 04:51:58.299784 | orchestrator | + echo 2026-02-16 04:51:58.299791 | orchestrator | + echo '# Status of Redis' 2026-02-16 04:51:58.299797 | orchestrator | + echo 2026-02-16 04:51:58.299805 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-02-16 04:51:58.304749 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002072s;;;0.000000;10.000000 2026-02-16 04:51:58.305033 | orchestrator | 2026-02-16 04:51:58.305122 | orchestrator | + popd 2026-02-16 04:51:58.305132 | orchestrator | + echo 2026-02-16 04:51:58.305142 | orchestrator | # Create backup of MariaDB database 2026-02-16 04:51:58.305150 | orchestrator | 2026-02-16 04:51:58.305157 | orchestrator | + echo '# Create backup of MariaDB database' 2026-02-16 04:51:58.305164 | orchestrator | + echo 2026-02-16 04:51:58.305171 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-02-16 04:52:00.535327 | orchestrator | 2026-02-16 04:52:00 | INFO  | Task 283b63eb-aec8-49cc-bdc8-f6c87a414157 (mariadb_backup) was prepared for execution. 2026-02-16 04:52:00.535416 | orchestrator | 2026-02-16 04:52:00 | INFO  | It takes a moment until task 283b63eb-aec8-49cc-bdc8-f6c87a414157 (mariadb_backup) has been started and output is visible here. 
2026-02-16 04:53:58.514128 | orchestrator | 2026-02-16 04:53:58.514237 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 04:53:58.514255 | orchestrator | 2026-02-16 04:53:58.514269 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 04:53:58.514281 | orchestrator | Monday 16 February 2026 04:52:04 +0000 (0:00:00.169) 0:00:00.169 ******* 2026-02-16 04:53:58.514293 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:53:58.514306 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:53:58.514317 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:53:58.514329 | orchestrator | 2026-02-16 04:53:58.514363 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 04:53:58.514376 | orchestrator | Monday 16 February 2026 04:52:04 +0000 (0:00:00.340) 0:00:00.509 ******* 2026-02-16 04:53:58.514387 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-16 04:53:58.514399 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-16 04:53:58.514410 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-16 04:53:58.514421 | orchestrator | 2026-02-16 04:53:58.514432 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-16 04:53:58.514444 | orchestrator | 2026-02-16 04:53:58.514455 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-16 04:53:58.514467 | orchestrator | Monday 16 February 2026 04:52:05 +0000 (0:00:00.580) 0:00:01.089 ******* 2026-02-16 04:53:58.514481 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 04:53:58.514500 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-16 04:53:58.514518 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-16 04:53:58.514536 | orchestrator | 
2026-02-16 04:53:58.514553 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-16 04:53:58.514571 | orchestrator | Monday 16 February 2026 04:52:06 +0000 (0:00:00.468) 0:00:01.558 ******* 2026-02-16 04:53:58.514589 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 04:53:58.514608 | orchestrator | 2026-02-16 04:53:58.514677 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-02-16 04:53:58.514714 | orchestrator | Monday 16 February 2026 04:52:06 +0000 (0:00:00.533) 0:00:02.091 ******* 2026-02-16 04:53:58.514732 | orchestrator | ok: [testbed-node-1] 2026-02-16 04:53:58.514748 | orchestrator | ok: [testbed-node-2] 2026-02-16 04:53:58.514765 | orchestrator | ok: [testbed-node-0] 2026-02-16 04:53:58.514781 | orchestrator | 2026-02-16 04:53:58.514798 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-02-16 04:53:58.514849 | orchestrator | Monday 16 February 2026 04:52:09 +0000 (0:00:03.079) 0:00:05.170 ******* 2026-02-16 04:53:58.514867 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-16 04:53:58.514885 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-16 04:53:58.514903 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-16 04:53:58.514921 | orchestrator | mariadb_bootstrap_restart 2026-02-16 04:53:58.514938 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:53:58.514956 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:53:58.514972 | orchestrator | changed: [testbed-node-0] 2026-02-16 04:53:58.514986 | orchestrator | 2026-02-16 04:53:58.515003 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-16 04:53:58.515021 | orchestrator | 
skipping: no hosts matched 2026-02-16 04:53:58.515038 | orchestrator | 2026-02-16 04:53:58.515057 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-16 04:53:58.515074 | orchestrator | skipping: no hosts matched 2026-02-16 04:53:58.515090 | orchestrator | 2026-02-16 04:53:58.515106 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-16 04:53:58.515123 | orchestrator | skipping: no hosts matched 2026-02-16 04:53:58.515141 | orchestrator | 2026-02-16 04:53:58.515158 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-16 04:53:58.515174 | orchestrator | 2026-02-16 04:53:58.515192 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-16 04:53:58.515209 | orchestrator | Monday 16 February 2026 04:53:57 +0000 (0:01:47.795) 0:01:52.966 ******* 2026-02-16 04:53:58.515226 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:53:58.515243 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:53:58.515261 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:53:58.515279 | orchestrator | 2026-02-16 04:53:58.515297 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-16 04:53:58.515333 | orchestrator | Monday 16 February 2026 04:53:57 +0000 (0:00:00.307) 0:01:53.274 ******* 2026-02-16 04:53:58.515353 | orchestrator | skipping: [testbed-node-0] 2026-02-16 04:53:58.515370 | orchestrator | skipping: [testbed-node-1] 2026-02-16 04:53:58.515388 | orchestrator | skipping: [testbed-node-2] 2026-02-16 04:53:58.515406 | orchestrator | 2026-02-16 04:53:58.515423 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 04:53:58.515443 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 
04:53:58.515463 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-16 04:53:58.515482 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-16 04:53:58.515502 | orchestrator | 2026-02-16 04:53:58.515520 | orchestrator | 2026-02-16 04:53:58.515539 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 04:53:58.515557 | orchestrator | Monday 16 February 2026 04:53:58 +0000 (0:00:00.425) 0:01:53.699 ******* 2026-02-16 04:53:58.515576 | orchestrator | =============================================================================== 2026-02-16 04:53:58.515595 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 107.80s 2026-02-16 04:53:58.515639 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.08s 2026-02-16 04:53:58.515652 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 2026-02-16 04:53:58.515664 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.53s 2026-02-16 04:53:58.515674 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.47s 2026-02-16 04:53:58.515685 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.43s 2026-02-16 04:53:58.515696 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-02-16 04:53:58.515707 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s 2026-02-16 04:53:58.825683 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-02-16 04:53:58.835197 | orchestrator | + set -e 2026-02-16 04:53:58.835297 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-16 04:53:58.835313 | orchestrator | ++ export 
INTERACTIVE=false 2026-02-16 04:53:58.835325 | orchestrator | ++ INTERACTIVE=false 2026-02-16 04:53:58.835337 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-16 04:53:58.835347 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-16 04:53:58.835359 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-16 04:53:58.836485 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-16 04:53:58.843529 | orchestrator | 2026-02-16 04:53:58.843593 | orchestrator | # OpenStack endpoints 2026-02-16 04:53:58.843606 | orchestrator | 2026-02-16 04:53:58.843618 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-16 04:53:58.843629 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-16 04:53:58.843640 | orchestrator | + export OS_CLOUD=admin 2026-02-16 04:53:58.843651 | orchestrator | + OS_CLOUD=admin 2026-02-16 04:53:58.843662 | orchestrator | + echo 2026-02-16 04:53:58.843673 | orchestrator | + echo '# OpenStack endpoints' 2026-02-16 04:53:58.843684 | orchestrator | + echo 2026-02-16 04:53:58.843695 | orchestrator | + openstack endpoint list 2026-02-16 04:54:02.134172 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-16 04:54:02.134263 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-02-16 04:54:02.134276 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-16 04:54:02.134308 | orchestrator | | 02e718d4afc7453b8440529a76d14386 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-16 04:54:02.134332 | orchestrator | | 0e0717c92ad3428c8fd070a8b00c8db6 | RegionOne | cinderv3 | 
volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-16 04:54:02.134342 | orchestrator | | 139f903f4e094159a70b1d7363ed6637 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-02-16 04:54:02.134351 | orchestrator | | 2286ebf294714da4b5ad60bdf9fe01fc | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-02-16 04:54:02.134360 | orchestrator | | 3b5e533a08dc45ecaaa89f9d9df2aa1a | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-02-16 04:54:02.134370 | orchestrator | | 3ba3ebe743324da48fa149708d6633da | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-02-16 04:54:02.134378 | orchestrator | | 44a600b93a9d4d7a9531ec831351e73d | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-02-16 04:54:02.134387 | orchestrator | | 481ff52342b349f4b1800f78881ba65e | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-02-16 04:54:02.134396 | orchestrator | | 48f7012e615a425caa73a9c5ab29272e | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-02-16 04:54:02.134404 | orchestrator | | 50523989d1d945e58060747320344eab | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-02-16 04:54:02.134413 | orchestrator | | 5cb60ab234554363936c063bbe650a1a | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-02-16 04:54:02.134422 | orchestrator | | 5e09a1e74edc4a0a9ccced7869cde72f | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-02-16 04:54:02.134431 | orchestrator | | 689b90ddd3dc4bfcb19dbfe7c811dd08 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-02-16 04:54:02.134439 
| orchestrator | | 7018cef52a2647cb95b20d1a858d207b | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-02-16 04:54:02.134448 | orchestrator | | 7d73e78762d54b37a6c170fcd4c1e865 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-02-16 04:54:02.134457 | orchestrator | | 993b36b489a4408e976fec09afeac0f3 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-02-16 04:54:02.134465 | orchestrator | | 9b96f72d11054a5eba1f7c56742fc528 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-02-16 04:54:02.134474 | orchestrator | | a2f4ea1386aa45dab230515219aae242 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-02-16 04:54:02.134482 | orchestrator | | a818f4cccefd4174bd949ffa74142a3f | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-02-16 04:54:02.134491 | orchestrator | | bf8b556edb504496ade606bc595a4b29 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-02-16 04:54:02.134515 | orchestrator | | cc6de6c5c7c442df95c013daa3ed1dc6 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-16 04:54:02.134545 | orchestrator | | d03cb96c7240425f81503c952676d8e1 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-02-16 04:54:02.134558 | orchestrator | | d1bc6adf286d4877ac3d4d85f96e7941 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-02-16 04:54:02.134567 | orchestrator | | d5c6520dd99943f99167dc0e9d693373 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-02-16 04:54:02.134575 | orchestrator | | d63b9b4ef7a5475a850178c4cd86c991 | RegionOne | swift | 
object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-16 04:54:02.134584 | orchestrator | | d99dd9ceb0c94ab0b7e9f75dc10b0cbf | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-02-16 04:54:02.134592 | orchestrator | | df299c2e91574caaa27d4d2f80b1ad27 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-02-16 04:54:02.134601 | orchestrator | | e03d3867387246cdb1d711f541d4a89e | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-02-16 04:54:02.134610 | orchestrator | | e7a069be96d9411695a0f71c399cec76 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-02-16 04:54:02.134618 | orchestrator | | ec8e2df87b7b42bea7a418fdb266c219 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-02-16 04:54:02.134628 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-16 04:54:02.391628 | orchestrator | 2026-02-16 04:54:02.391727 | orchestrator | # Cinder 2026-02-16 04:54:02.391742 | orchestrator | 2026-02-16 04:54:02.391753 | orchestrator | + echo 2026-02-16 04:54:02.391765 | orchestrator | + echo '# Cinder' 2026-02-16 04:54:02.391776 | orchestrator | + echo 2026-02-16 04:54:02.391787 | orchestrator | + openstack volume service list 2026-02-16 04:54:04.979659 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-16 04:54:04.979733 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-02-16 04:54:04.979742 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-16 04:54:04.979749 | 
orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-16T04:53:57.000000 | 2026-02-16 04:54:04.979757 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-16T04:53:56.000000 | 2026-02-16 04:54:04.979763 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-16T04:53:57.000000 | 2026-02-16 04:54:04.979770 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-02-16T04:53:56.000000 | 2026-02-16 04:54:04.979777 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-02-16T04:54:03.000000 | 2026-02-16 04:54:04.979784 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-02-16T04:54:03.000000 | 2026-02-16 04:54:04.979791 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-02-16T04:54:00.000000 | 2026-02-16 04:54:04.979797 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-02-16T04:54:01.000000 | 2026-02-16 04:54:04.979824 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-02-16T04:54:02.000000 | 2026-02-16 04:54:04.979852 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-16 04:54:05.245231 | orchestrator | 2026-02-16 04:54:05.245340 | orchestrator | # Neutron 2026-02-16 04:54:05.245365 | orchestrator | 2026-02-16 04:54:05.245385 | orchestrator | + echo 2026-02-16 04:54:05.245404 | orchestrator | + echo '# Neutron' 2026-02-16 04:54:05.245424 | orchestrator | + echo 2026-02-16 04:54:05.245444 | orchestrator | + openstack network agent list 2026-02-16 04:54:07.841995 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-16 04:54:07.842127 | orchestrator | | ID | Agent Type | Host 
| Availability Zone | Alive | State | Binary | 2026-02-16 04:54:07.842138 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-16 04:54:07.842146 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-02-16 04:54:07.842153 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-02-16 04:54:07.842161 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-02-16 04:54:07.842168 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-02-16 04:54:07.842190 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-02-16 04:54:07.842198 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-02-16 04:54:07.842205 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-02-16 04:54:07.842212 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-02-16 04:54:07.842220 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-02-16 04:54:07.842227 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-16 04:54:08.099137 | orchestrator | + openstack network service provider list 2026-02-16 04:54:10.647582 | orchestrator | +---------------+------+---------+ 2026-02-16 04:54:10.647687 | orchestrator | | Service Type 
| Name | Default | 2026-02-16 04:54:10.647706 | orchestrator | +---------------+------+---------+ 2026-02-16 04:54:10.647719 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-02-16 04:54:10.647734 | orchestrator | +---------------+------+---------+ 2026-02-16 04:54:10.913765 | orchestrator | 2026-02-16 04:54:10.913906 | orchestrator | # Nova 2026-02-16 04:54:10.913921 | orchestrator | 2026-02-16 04:54:10.913932 | orchestrator | + echo 2026-02-16 04:54:10.913942 | orchestrator | + echo '# Nova' 2026-02-16 04:54:10.913952 | orchestrator | + echo 2026-02-16 04:54:10.913962 | orchestrator | + openstack compute service list 2026-02-16 04:54:13.749281 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-16 04:54:13.749367 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-02-16 04:54:13.749380 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-16 04:54:13.749389 | orchestrator | | 35d0f510-dadb-422b-b47a-b7c7345a4f02 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-16T04:54:10.000000 | 2026-02-16 04:54:13.749423 | orchestrator | | 48da6471-bc09-4e8d-b861-fc3b626e2e6c | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-16T04:54:05.000000 | 2026-02-16 04:54:13.749432 | orchestrator | | 5a3c04b2-ae88-4525-958d-18c476807c9d | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-16T04:54:05.000000 | 2026-02-16 04:54:13.749441 | orchestrator | | c5fd968c-935c-4074-880a-d783f3464481 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-02-16T04:54:11.000000 | 2026-02-16 04:54:13.749450 | orchestrator | | 76b2dc47-0c51-4754-9d2c-55e64ca791f9 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-02-16T04:54:11.000000 | 2026-02-16 04:54:13.749459 | orchestrator 
| | 0855cc54-4664-400e-a8be-4f190b5a2d1d | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-02-16T04:54:12.000000 | 2026-02-16 04:54:13.749467 | orchestrator | | 6d42aa1d-207a-498c-ab40-ce3b8f0d5a09 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-02-16T04:54:08.000000 | 2026-02-16 04:54:13.749476 | orchestrator | | 69a5dadb-71b3-4cdb-8c2c-1505fee23824 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-02-16T04:54:08.000000 | 2026-02-16 04:54:13.749484 | orchestrator | | fa261d5e-0d95-42ed-8896-a9e38ad1f797 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-02-16T04:54:09.000000 | 2026-02-16 04:54:13.749493 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-16 04:54:14.012243 | orchestrator | + openstack hypervisor list 2026-02-16 04:54:16.654893 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-16 04:54:16.655013 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-02-16 04:54:16.655036 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-16 04:54:16.655055 | orchestrator | | 5750a380-6878-4e8f-82bb-3c46b6eed7ef | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-02-16 04:54:16.655073 | orchestrator | | c9452c7f-4c00-4d69-810b-6c4bd11f1228 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-02-16 04:54:16.655089 | orchestrator | | ca4687dd-7ce0-4f14-862c-f4a83ce12bcf | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-02-16 04:54:16.655106 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-16 04:54:16.918444 | orchestrator | 2026-02-16 04:54:16.918590 | orchestrator | # Run OpenStack test play 2026-02-16 04:54:16.918616 | orchestrator | 2026-02-16 
04:54:16.918641 | orchestrator | + echo 2026-02-16 04:54:16.918663 | orchestrator | + echo '# Run OpenStack test play' 2026-02-16 04:54:16.918683 | orchestrator | + echo 2026-02-16 04:54:16.918701 | orchestrator | + osism apply --environment openstack test 2026-02-16 04:54:18.839313 | orchestrator | 2026-02-16 04:54:18 | INFO  | Trying to run play test in environment openstack 2026-02-16 04:54:28.933700 | orchestrator | 2026-02-16 04:54:28 | INFO  | Task 946198ca-b609-4e16-a9bc-367235f0d31a (test) was prepared for execution. 2026-02-16 04:54:28.933818 | orchestrator | 2026-02-16 04:54:28 | INFO  | It takes a moment until task 946198ca-b609-4e16-a9bc-367235f0d31a (test) has been started and output is visible here. 2026-02-16 04:57:16.378838 | orchestrator | 2026-02-16 04:57:16.378985 | orchestrator | PLAY [Create test project] ***************************************************** 2026-02-16 04:57:16.379017 | orchestrator | 2026-02-16 04:57:16.379069 | orchestrator | TASK [Create test domain] ****************************************************** 2026-02-16 04:57:16.379083 | orchestrator | Monday 16 February 2026 04:54:33 +0000 (0:00:00.076) 0:00:00.076 ******* 2026-02-16 04:57:16.379095 | orchestrator | changed: [localhost] 2026-02-16 04:57:16.379107 | orchestrator | 2026-02-16 04:57:16.379119 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-02-16 04:57:16.379130 | orchestrator | Monday 16 February 2026 04:54:36 +0000 (0:00:03.601) 0:00:03.677 ******* 2026-02-16 04:57:16.379141 | orchestrator | changed: [localhost] 2026-02-16 04:57:16.379152 | orchestrator | 2026-02-16 04:57:16.379187 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-02-16 04:57:16.379198 | orchestrator | Monday 16 February 2026 04:54:40 +0000 (0:00:04.240) 0:00:07.918 ******* 2026-02-16 04:57:16.379209 | orchestrator | changed: [localhost] 2026-02-16 04:57:16.379222 | orchestrator | 2026-02-16 
04:57:16.379242 | orchestrator | TASK [Create test project] ***************************************************** 2026-02-16 04:57:16.379263 | orchestrator | Monday 16 February 2026 04:54:47 +0000 (0:00:06.534) 0:00:14.452 ******* 2026-02-16 04:57:16.379281 | orchestrator | changed: [localhost] 2026-02-16 04:57:16.379300 | orchestrator | 2026-02-16 04:57:16.379319 | orchestrator | TASK [Create test user] ******************************************************** 2026-02-16 04:57:16.379339 | orchestrator | Monday 16 February 2026 04:54:51 +0000 (0:00:04.091) 0:00:18.544 ******* 2026-02-16 04:57:16.379360 | orchestrator | changed: [localhost] 2026-02-16 04:57:16.379380 | orchestrator | 2026-02-16 04:57:16.379401 | orchestrator | TASK [Add member roles to user test] ******************************************* 2026-02-16 04:57:16.379423 | orchestrator | Monday 16 February 2026 04:54:55 +0000 (0:00:04.175) 0:00:22.719 ******* 2026-02-16 04:57:16.379479 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-02-16 04:57:16.379497 | orchestrator | changed: [localhost] => (item=member) 2026-02-16 04:57:16.379510 | orchestrator | changed: [localhost] => (item=creator) 2026-02-16 04:57:16.379523 | orchestrator | 2026-02-16 04:57:16.379535 | orchestrator | TASK [Create test server group] ************************************************ 2026-02-16 04:57:16.379549 | orchestrator | Monday 16 February 2026 04:55:07 +0000 (0:00:11.654) 0:00:34.374 ******* 2026-02-16 04:57:16.379561 | orchestrator | changed: [localhost] 2026-02-16 04:57:16.379574 | orchestrator | 2026-02-16 04:57:16.379587 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-02-16 04:57:16.379600 | orchestrator | Monday 16 February 2026 04:55:12 +0000 (0:00:04.965) 0:00:39.340 ******* 2026-02-16 04:57:16.379613 | orchestrator | changed: [localhost] 2026-02-16 04:57:16.379625 | orchestrator | 2026-02-16 04:57:16.379638 | orchestrator | TASK [Add rule 
to ssh security group] ****************************************** 2026-02-16 04:57:16.379650 | orchestrator | Monday 16 February 2026 04:55:17 +0000 (0:00:05.282) 0:00:44.622 ******* 2026-02-16 04:57:16.379663 | orchestrator | changed: [localhost] 2026-02-16 04:57:16.379676 | orchestrator | 2026-02-16 04:57:16.379688 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-02-16 04:57:16.379701 | orchestrator | Monday 16 February 2026 04:55:21 +0000 (0:00:04.326) 0:00:48.949 ******* 2026-02-16 04:57:16.379714 | orchestrator | changed: [localhost] 2026-02-16 04:57:16.379727 | orchestrator | 2026-02-16 04:57:16.379738 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2026-02-16 04:57:16.379749 | orchestrator | Monday 16 February 2026 04:55:25 +0000 (0:00:03.851) 0:00:52.801 ******* 2026-02-16 04:57:16.379760 | orchestrator | changed: [localhost] 2026-02-16 04:57:16.379770 | orchestrator | 2026-02-16 04:57:16.379780 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-02-16 04:57:16.379789 | orchestrator | Monday 16 February 2026 04:55:29 +0000 (0:00:04.110) 0:00:56.912 ******* 2026-02-16 04:57:16.379799 | orchestrator | changed: [localhost] 2026-02-16 04:57:16.379808 | orchestrator | 2026-02-16 04:57:16.379818 | orchestrator | TASK [Create test network] ***************************************************** 2026-02-16 04:57:16.379828 | orchestrator | Monday 16 February 2026 04:55:33 +0000 (0:00:03.832) 0:01:00.745 ******* 2026-02-16 04:57:16.379838 | orchestrator | changed: [localhost] 2026-02-16 04:57:16.379847 | orchestrator | 2026-02-16 04:57:16.379857 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-02-16 04:57:16.379867 | orchestrator | Monday 16 February 2026 04:55:38 +0000 (0:00:04.777) 0:01:05.523 ******* 2026-02-16 04:57:16.379877 | orchestrator | changed: 
[localhost] 2026-02-16 04:57:16.379886 | orchestrator | 2026-02-16 04:57:16.379896 | orchestrator | TASK [Create test router] ****************************************************** 2026-02-16 04:57:16.379905 | orchestrator | Monday 16 February 2026 04:55:43 +0000 (0:00:05.408) 0:01:10.931 ******* 2026-02-16 04:57:16.379926 | orchestrator | changed: [localhost] 2026-02-16 04:57:16.379935 | orchestrator | 2026-02-16 04:57:16.379945 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-02-16 04:57:16.379954 | orchestrator | 2026-02-16 04:57:16.379964 | orchestrator | TASK [Get test server group] *************************************************** 2026-02-16 04:57:16.379974 | orchestrator | Monday 16 February 2026 04:55:55 +0000 (0:00:11.660) 0:01:22.591 ******* 2026-02-16 04:57:16.379983 | orchestrator | ok: [localhost] 2026-02-16 04:57:16.379993 | orchestrator | 2026-02-16 04:57:16.380003 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-02-16 04:57:16.380013 | orchestrator | Monday 16 February 2026 04:55:59 +0000 (0:00:04.147) 0:01:26.738 ******* 2026-02-16 04:57:16.380022 | orchestrator | skipping: [localhost] 2026-02-16 04:57:16.380032 | orchestrator | 2026-02-16 04:57:16.380042 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-02-16 04:57:16.380051 | orchestrator | Monday 16 February 2026 04:55:59 +0000 (0:00:00.037) 0:01:26.776 ******* 2026-02-16 04:57:16.380061 | orchestrator | skipping: [localhost] 2026-02-16 04:57:16.380071 | orchestrator | 2026-02-16 04:57:16.380080 | orchestrator | TASK [Delete test instances] *************************************************** 2026-02-16 04:57:16.380090 | orchestrator | Monday 16 February 2026 04:55:59 +0000 (0:00:00.054) 0:01:26.831 ******* 2026-02-16 04:57:16.380115 | orchestrator | skipping: [localhost] => (item=test-4)  2026-02-16 04:57:16.380125 | orchestrator | 
skipping: [localhost] => (item=test-3)  2026-02-16 04:57:16.380156 | orchestrator | skipping: [localhost] => (item=test-2)  2026-02-16 04:57:16.380166 | orchestrator | skipping: [localhost] => (item=test-1)  2026-02-16 04:57:16.380176 | orchestrator | skipping: [localhost] => (item=test)  2026-02-16 04:57:16.380185 | orchestrator | skipping: [localhost] 2026-02-16 04:57:16.380195 | orchestrator | 2026-02-16 04:57:16.380205 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-02-16 04:57:16.380221 | orchestrator | Monday 16 February 2026 04:56:00 +0000 (0:00:00.164) 0:01:26.995 ******* 2026-02-16 04:57:16.380237 | orchestrator | skipping: [localhost] 2026-02-16 04:57:16.380252 | orchestrator | 2026-02-16 04:57:16.380268 | orchestrator | TASK [Create test instances] *************************************************** 2026-02-16 04:57:16.380285 | orchestrator | Monday 16 February 2026 04:56:00 +0000 (0:00:00.177) 0:01:27.173 ******* 2026-02-16 04:57:16.380302 | orchestrator | changed: [localhost] => (item=test) 2026-02-16 04:57:16.380319 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-16 04:57:16.380331 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-16 04:57:16.380340 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-16 04:57:16.380350 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-16 04:57:16.380359 | orchestrator | 2026-02-16 04:57:16.380369 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-02-16 04:57:16.380381 | orchestrator | Monday 16 February 2026 04:56:05 +0000 (0:00:04.849) 0:01:32.022 ******* 2026-02-16 04:57:16.380398 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-02-16 04:57:16.380415 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 
2026-02-16 04:57:16.380431 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-02-16 04:57:16.380472 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-02-16 04:57:16.380492 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j340426475571.3747', 'results_file': '/ansible/.ansible_async/j340426475571.3747', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-16 04:57:16.380512 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j488376629380.3772', 'results_file': '/ansible/.ansible_async/j488376629380.3772', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-16 04:57:16.380542 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-02-16 04:57:16.380558 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j593875354347.3797', 'results_file': '/ansible/.ansible_async/j593875354347.3797', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-16 04:57:16.380575 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j286408684975.3822', 'results_file': '/ansible/.ansible_async/j286408684975.3822', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-16 04:57:16.380591 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j480197038003.3847', 'results_file': '/ansible/.ansible_async/j480197038003.3847', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-16 04:57:16.380608 | orchestrator | 2026-02-16 04:57:16.380625 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-02-16 
04:57:16.380655 | orchestrator | Monday 16 February 2026 04:57:02 +0000 (0:00:57.383) 0:02:29.406 ******* 2026-02-16 04:57:16.380673 | orchestrator | changed: [localhost] => (item=test) 2026-02-16 04:57:16.380690 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-16 04:57:16.380706 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-16 04:57:16.380716 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-16 04:57:16.380726 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-16 04:57:16.380735 | orchestrator | 2026-02-16 04:57:16.380745 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-02-16 04:57:16.380754 | orchestrator | Monday 16 February 2026 04:57:06 +0000 (0:00:04.466) 0:02:33.873 ******* 2026-02-16 04:57:16.380764 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-02-16 04:57:16.380775 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j893962125034.3958', 'results_file': '/ansible/.ansible_async/j893962125034.3958', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-16 04:57:16.380785 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j990591834338.3983', 'results_file': '/ansible/.ansible_async/j990591834338.3983', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-16 04:57:16.380796 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j748339545774.4008', 'results_file': '/ansible/.ansible_async/j748339545774.4008', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-16 04:57:16.380827 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j530717748197.4033', 'results_file': '/ansible/.ansible_async/j530717748197.4033', 
'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-16 04:57:55.797629 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j611773652511.4058', 'results_file': '/ansible/.ansible_async/j611773652511.4058', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-16 04:57:55.797730 | orchestrator | 2026-02-16 04:57:55.797742 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-02-16 04:57:55.797752 | orchestrator | Monday 16 February 2026 04:57:16 +0000 (0:00:09.461) 0:02:43.335 ******* 2026-02-16 04:57:55.797761 | orchestrator | changed: [localhost] => (item=test) 2026-02-16 04:57:55.797770 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-16 04:57:55.797778 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-16 04:57:55.797786 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-16 04:57:55.797793 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-16 04:57:55.797801 | orchestrator | 2026-02-16 04:57:55.797828 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-02-16 04:57:55.797836 | orchestrator | Monday 16 February 2026 04:57:21 +0000 (0:00:04.893) 0:02:48.228 ******* 2026-02-16 04:57:55.797843 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-02-16 04:57:55.797853 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j724971127626.4127', 'results_file': '/ansible/.ansible_async/j724971127626.4127', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-16 04:57:55.797861 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j187200235904.4152', 'results_file': '/ansible/.ansible_async/j187200235904.4152', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-16 04:57:55.797869 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j420930333512.4178', 'results_file': '/ansible/.ansible_async/j420930333512.4178', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-16 04:57:55.797876 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j878086079102.4204', 'results_file': '/ansible/.ansible_async/j878086079102.4204', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-16 04:57:55.797883 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j524728122231.4237', 'results_file': '/ansible/.ansible_async/j524728122231.4237', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-16 04:57:55.797891 | orchestrator | 2026-02-16 04:57:55.797898 | orchestrator | TASK [Create test volume] ****************************************************** 2026-02-16 04:57:55.797906 | orchestrator | Monday 16 February 2026 04:57:30 +0000 (0:00:09.390) 0:02:57.619 ******* 2026-02-16 04:57:55.797913 | orchestrator | changed: [localhost] 2026-02-16 04:57:55.797921 | orchestrator | 2026-02-16 04:57:55.797928 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-02-16 04:57:55.797936 | orchestrator | Monday 16 February 
2026 04:57:37 +0000 (0:00:06.590) 0:03:04.209 ******* 2026-02-16 04:57:55.797943 | orchestrator | changed: [localhost] 2026-02-16 04:57:55.797951 | orchestrator | 2026-02-16 04:57:55.797958 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-02-16 04:57:55.797966 | orchestrator | Monday 16 February 2026 04:57:50 +0000 (0:00:13.104) 0:03:17.314 ******* 2026-02-16 04:57:55.797973 | orchestrator | ok: [localhost] 2026-02-16 04:57:55.797980 | orchestrator | 2026-02-16 04:57:55.797987 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-02-16 04:57:55.797994 | orchestrator | Monday 16 February 2026 04:57:55 +0000 (0:00:05.167) 0:03:22.481 ******* 2026-02-16 04:57:55.798001 | orchestrator | ok: [localhost] => { 2026-02-16 04:57:55.798008 | orchestrator |  "msg": "192.168.112.120" 2026-02-16 04:57:55.798065 | orchestrator | } 2026-02-16 04:57:55.798074 | orchestrator | 2026-02-16 04:57:55.798081 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 04:57:55.798091 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-16 04:57:55.798100 | orchestrator | 2026-02-16 04:57:55.798107 | orchestrator | 2026-02-16 04:57:55.798115 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 04:57:55.798123 | orchestrator | Monday 16 February 2026 04:57:55 +0000 (0:00:00.037) 0:03:22.519 ******* 2026-02-16 04:57:55.798130 | orchestrator | =============================================================================== 2026-02-16 04:57:55.798138 | orchestrator | Wait for instance creation to complete --------------------------------- 57.38s 2026-02-16 04:57:55.798145 | orchestrator | Attach test volume ----------------------------------------------------- 13.10s 2026-02-16 04:57:55.798153 | orchestrator | Create test router 
----------------------------------------------------- 11.66s 2026-02-16 04:57:55.798182 | orchestrator | Add member roles to user test ------------------------------------------ 11.65s 2026-02-16 04:57:55.798192 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.46s 2026-02-16 04:57:55.798200 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.39s 2026-02-16 04:57:55.798209 | orchestrator | Create test volume ------------------------------------------------------ 6.59s 2026-02-16 04:57:55.798235 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.53s 2026-02-16 04:57:55.798244 | orchestrator | Create test subnet ------------------------------------------------------ 5.41s 2026-02-16 04:57:55.798253 | orchestrator | Create ssh security group ----------------------------------------------- 5.28s 2026-02-16 04:57:55.798261 | orchestrator | Create floating ip address ---------------------------------------------- 5.17s 2026-02-16 04:57:55.798270 | orchestrator | Create test server group ------------------------------------------------ 4.97s 2026-02-16 04:57:55.798279 | orchestrator | Add tag to instances ---------------------------------------------------- 4.89s 2026-02-16 04:57:55.798287 | orchestrator | Create test instances --------------------------------------------------- 4.85s 2026-02-16 04:57:55.798295 | orchestrator | Create test network ----------------------------------------------------- 4.78s 2026-02-16 04:57:55.798303 | orchestrator | Add metadata to instances ----------------------------------------------- 4.47s 2026-02-16 04:57:55.798323 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.33s 2026-02-16 04:57:55.798332 | orchestrator | Create test-admin user -------------------------------------------------- 4.24s 2026-02-16 04:57:55.798340 | orchestrator | Create test user 
-------------------------------------------------------- 4.18s 2026-02-16 04:57:55.798356 | orchestrator | Get test server group --------------------------------------------------- 4.15s 2026-02-16 04:57:56.148435 | orchestrator | + server_list 2026-02-16 04:57:56.148512 | orchestrator | + openstack --os-cloud test server list 2026-02-16 04:57:59.956891 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-16 04:57:59.956977 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-02-16 04:57:59.956988 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-16 04:57:59.956995 | orchestrator | | def6eca1-01b1-445b-a85c-3f0b4ceed1e7 | test-4 | ACTIVE | test=192.168.112.170, 192.168.200.225 | N/A (booted from volume) | SCS-1L-1 | 2026-02-16 04:57:59.957002 | orchestrator | | d59760a6-7812-4c63-ae59-83f6fd640a6b | test-2 | ACTIVE | test=192.168.112.137, 192.168.200.15 | N/A (booted from volume) | SCS-1L-1 | 2026-02-16 04:57:59.957008 | orchestrator | | ec16ba9c-ff0e-4ef1-86d0-0366080c2fa4 | test-3 | ACTIVE | test=192.168.112.153, 192.168.200.132 | N/A (booted from volume) | SCS-1L-1 | 2026-02-16 04:57:59.957015 | orchestrator | | 5a113ef2-8839-4475-a421-254ea7536807 | test-1 | ACTIVE | test=192.168.112.133, 192.168.200.206 | N/A (booted from volume) | SCS-1L-1 | 2026-02-16 04:57:59.957021 | orchestrator | | 4e914e32-966a-48a0-a50e-fbdf4a425ed4 | test | ACTIVE | test=192.168.112.120, 192.168.200.246 | N/A (booted from volume) | SCS-1L-1 | 2026-02-16 04:57:59.957028 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-16 04:58:00.249594 | orchestrator | + openstack --os-cloud test server show test 2026-02-16 04:58:03.730690 | orchestrator 
| +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-16 04:58:03.730842 | orchestrator | | Field | Value | 2026-02-16 04:58:03.730892 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-16 04:58:03.730913 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-16 04:58:03.730926 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-16 04:58:03.730966 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-16 04:58:03.730979 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-02-16 04:58:03.730990 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-16 04:58:03.731002 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-16 04:58:03.731036 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-16 04:58:03.731048 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-16 04:58:03.731069 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-16 04:58:03.731081 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-16 04:58:03.731098 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-16 04:58:03.731110 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-16 04:58:03.731121 | orchestrator | | OS-EXT-STS:power_state | 
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2026-02-16T04:56:36.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | test=192.168.112.120, 192.168.200.246 |
| config_drive | |
| created | 2026-02-16T04:56:09Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 4e6833e7eee613d103dcf5f2ba32e7293ef46ba69e2b7949cba12147 |
| host_status | None |
| id | 4e914e32-966a-48a0-a50e-fbdf4a425ed4 |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | e3f3367780ec4fc4b32b94fea8ba2f38 |
| properties | hostname='test' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2026-02-16T04:57:08Z |
| user_id | 0c9b2c3a131649a789c5a13b89a1d655 |
| volumes_attached | delete_on_termination='True', id='65c4818e-87e2-45b0-9a5b-706bdac55ef3' |
| | delete_on_termination='False', id='2c251860-480d-44b6-a189-143760630382' |

2026-02-16 04:58:04.019178 | orchestrator | + openstack --os-cloud test server show test-1
| Field | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2026-02-16T04:56:36.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | test=192.168.112.133, 192.168.200.206 |
| config_drive | |
| created | 2026-02-16T04:56:10Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 4e6833e7eee613d103dcf5f2ba32e7293ef46ba69e2b7949cba12147 |
| host_status | None |
| id | 5a113ef2-8839-4475-a421-254ea7536807 |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-1 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | e3f3367780ec4fc4b32b94fea8ba2f38 |
| properties | hostname='test-1' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2026-02-16T04:57:09Z |
| user_id | 0c9b2c3a131649a789c5a13b89a1d655 |
| volumes_attached | delete_on_termination='True', id='aa229309-50b4-4656-a742-e061333f5128' |

2026-02-16 04:58:07.509522 | orchestrator | + openstack --os-cloud test server show test-2
| Field | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-2 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2026-02-16T04:56:38.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | test=192.168.112.137, 192.168.200.15 |
| config_drive | |
| created | 2026-02-16T04:56:13Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 4e6833e7eee613d103dcf5f2ba32e7293ef46ba69e2b7949cba12147 |
| host_status | None |
| id | d59760a6-7812-4c63-ae59-83f6fd640a6b |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-2 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | e3f3367780ec4fc4b32b94fea8ba2f38 |
| properties | hostname='test-2' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2026-02-16T04:57:09Z |
| user_id | 0c9b2c3a131649a789c5a13b89a1d655 |
| volumes_attached | delete_on_termination='True', id='bd960a89-6ffe-458b-a013-015628552f3e' |

2026-02-16 04:58:10.767912 | orchestrator | + openstack --os-cloud test server show test-3
| Field | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-3 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2026-02-16T04:56:39.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | test=192.168.112.153, 192.168.200.132 |
| config_drive | |
| created | 2026-02-16T04:56:13Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 4e6833e7eee613d103dcf5f2ba32e7293ef46ba69e2b7949cba12147 |
| host_status | None |
| id | ec16ba9c-ff0e-4ef1-86d0-0366080c2fa4 |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-3 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | e3f3367780ec4fc4b32b94fea8ba2f38 |
| properties | hostname='test-3' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2026-02-16T04:57:10Z |
| user_id | 0c9b2c3a131649a789c5a13b89a1d655 |
| volumes_attached | delete_on_termination='True', id='95d138c0-62fc-433d-b329-e17a23a49a51' |

2026-02-16 04:58:14.012405 | orchestrator | + openstack --os-cloud test server show test-4
| Field | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-4 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2026-02-16T04:56:41.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | test=192.168.112.170, 192.168.200.225 |
| config_drive | |
| created | 2026-02-16T04:56:14Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 035442572783dbbec9092ec5a2c37c2c047612141562447cae41deea |
| host_status | None |
| id | def6eca1-01b1-445b-a85c-3f0b4ceed1e7 |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-4 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | e3f3367780ec4fc4b32b94fea8ba2f38 |
| properties | hostname='test-4' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2026-02-16T04:57:11Z |
| user_id | 0c9b2c3a131649a789c5a13b89a1d655 |
| volumes_attached | delete_on_termination='True', id='b1d0656e-42ca-4f1b-9963-b4c4c497347a' |

2026-02-16 04:58:17.313048 | orchestrator | + server_ping
2026-02-16 04:58:17.314211 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-02-16 04:58:17.314629 | orchestrator | ++ tr -d '\r'
2026-02-16 04:58:20.235541 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-16 04:58:20.235662 | orchestrator | + ping -c3 192.168.112.137
PING 192.168.112.137 (192.168.112.137) 56(84) bytes of data.
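The job verifies the instances by dumping human-readable `server show` tables; the same ACTIVE check can be scripted against the CLI's value output. A minimal sketch, where `get_status` is a hypothetical stand-in for the real `openstack --os-cloud test server show <name> -f value -c status` call:

```shell
#!/usr/bin/env sh
# Sketch of an ACTIVE-status check over the testbed nodes.
# get_status is a stand-in so the loop can run without a cloud;
# the real job would invoke the openstack CLI here.
get_status() {
    # openstack --os-cloud test server show "$1" -f value -c status
    echo "ACTIVE"
}

check_servers() {
    for name in test test-1 test-2 test-3 test-4; do
        status="$(get_status "$name")"
        if [ "$status" != "ACTIVE" ]; then
            echo "ERROR: $name is $status" >&2
            return 1
        fi
        echo "$name: $status"
    done
}

check_servers
```

With `-f value -c status` the check needs no table parsing, which is why scripted consumers usually prefer it over the default table formatter.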
64 bytes from 192.168.112.137: icmp_seq=1 ttl=63 time=7.70 ms
64 bytes from 192.168.112.137: icmp_seq=2 ttl=63 time=3.11 ms
64 bytes from 192.168.112.137: icmp_seq=3 ttl=63 time=2.16 ms
--- 192.168.112.137 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 2.162/4.324/7.698/2.416 ms
2026-02-16 04:58:22.252479 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-16 04:58:22.252501 | orchestrator | + ping -c3 192.168.112.120
PING 192.168.112.120 (192.168.112.120) 56(84) bytes of data.
64 bytes from 192.168.112.120: icmp_seq=1 ttl=63 time=5.23 ms
64 bytes from 192.168.112.120: icmp_seq=2 ttl=63 time=2.58 ms
64 bytes from 192.168.112.120: icmp_seq=3 ttl=63 time=1.80 ms
--- 192.168.112.120 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.796/3.202/5.233/1.470 ms
2026-02-16 04:58:24.262976 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-16 04:58:24.262988 | orchestrator | + ping -c3 192.168.112.170
PING 192.168.112.170 (192.168.112.170) 56(84) bytes of data.
64 bytes from 192.168.112.170: icmp_seq=1 ttl=63 time=11.0 ms
64 bytes from 192.168.112.170: icmp_seq=2 ttl=63 time=2.58 ms
64 bytes from 192.168.112.170: icmp_seq=3 ttl=63 time=2.32 ms
--- 192.168.112.170 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 2.315/5.310/11.041/4.053 ms
2026-02-16 04:58:26.272547 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-16 04:58:26.272575 | orchestrator | + ping -c3 192.168.112.133
PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data.
64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=9.28 ms
64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.44 ms
64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=2.46 ms
--- 192.168.112.133 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 2.443/4.725/9.277/3.218 ms
2026-02-16 04:58:28.284235 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-16 04:58:28.284274 | orchestrator | + ping -c3 192.168.112.153
PING 192.168.112.153 (192.168.112.153) 56(84) bytes of data.
64 bytes from 192.168.112.153: icmp_seq=1 ttl=63 time=7.55 ms
64 bytes from 192.168.112.153: icmp_seq=2 ttl=63 time=2.69 ms
64 bytes from 192.168.112.153: icmp_seq=3 ttl=63 time=1.85 ms
--- 192.168.112.153 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.845/4.028/7.548/2.512 ms
2026-02-16 04:58:30.294116 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-16 04:58:30.393447 | orchestrator | ok: Runtime: 0:09:34.116216

2026-02-16 04:58:30.450545 | TASK [Run tempest]
2026-02-16 04:58:30.989335 | orchestrator | skipping: Conditional result was False

2026-02-16 04:58:31.007016 | TASK [Check prometheus alert status]
2026-02-16 04:58:31.545169 | orchestrator | skipping: Conditional result was False

2026-02-16 04:58:31.559259 | PLAY [Upgrade testbed]

2026-02-16 04:58:31.571046 | TASK [Print next ceph version]
2026-02-16 04:58:31.645876 | orchestrator | ok

2026-02-16 04:58:31.656671 | TASK [Print next openstack version]
2026-02-16 04:58:31.720781 | orchestrator | ok

2026-02-16 04:58:31.730172 | TASK [Print next manager version]
2026-02-16 04:58:31.799170 | orchestrator | ok

2026-02-16 04:58:31.809757 | TASK [Set cloud fact (Zuul deployment)]
2026-02-16 04:58:31.881965 | orchestrator | ok

2026-02-16 04:58:31.891235 | TASK [Set cloud fact (local deployment)]
2026-02-16 04:58:31.926634 | orchestrator | skipping: Conditional result was False
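The `server_ping` step above iterates over every ACTIVE floating IP and pings it three times; since the script runs under `set -e`, a single unreachable address aborts the whole job. The loop can be sketched as follows, with `list_ips` standing in for the real `openstack floating ip list` call so the logic is runnable without a cloud:

```shell
#!/usr/bin/env sh
set -e

# Stand-in for:
#   openstack --os-cloud test floating ip list --status ACTIVE \
#     -f value -c "Floating IP Address" | tr -d '\r'
# (tr strips carriage returns that would otherwise corrupt the addresses)
list_ips() {
    printf '192.168.112.137\n192.168.112.120\n'
}

for address in $(list_ips); do
    echo "checking $address"
    # ping -c3 "$address"   # real job; under set -e, any ping failure aborts
done
```

The `tr -d '\r'` in the original matters: CLI output piped through some transports carries CR characters, and `ping 192.168.112.137\r` would fail to resolve.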
2026-02-16 04:58:31.936467 | TASK [Fetch manager address]
2026-02-16 04:58:32.227520 | orchestrator | ok

2026-02-16 04:58:32.237222 | TASK [Set manager_host address]
2026-02-16 04:58:32.313534 | orchestrator | ok

2026-02-16 04:58:32.322179 | TASK [Run upgrade]
+ set -e
+ export MANAGER_VERSION=10.0.0-rc.1
+ MANAGER_VERSION=10.0.0-rc.1
+ CEPH_VERSION=reef
+ OPENSTACK_VERSION=2024.2
+ KOLLA_NAMESPACE=kolla/release
+ sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release'
+ set -e
+ source /opt/configuration/scripts/include.sh
++ export INTERACTIVE=false
++ INTERACTIVE=false
++ export OSISM_APPLY_RETRY=1
++ OSISM_APPLY_RETRY=1
++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible
+ OLD_MANAGER_VERSION=v0.20251130.0
++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible

# UPGRADE MANAGER

+ OLD_OPENSTACK_VERSION=2024.2
+ echo
+ echo '# UPGRADE MANAGER'
+ echo
+ export MANAGER_VERSION=10.0.0-rc.1
+ MANAGER_VERSION=10.0.0-rc.1
+ CEPH_VERSION=reef
+ OPENSTACK_VERSION=2024.2
+ KOLLA_NAMESPACE=kolla/release
+ /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1
+ set -e
+ VERSION=10.0.0-rc.1
+ sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml
+ [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]]
+ sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
+ sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
+ sh -c /opt/configuration/scripts/sync-configuration-repository.sh
+ set -e
/opt/configuration ~
+ pushd /opt/configuration
+ [[ -e /opt/venv/bin/activate ]]
+ source /opt/venv/bin/activate
++ deactivate nondestructive
++ '[' -n '' ']'
++ '[' -n '' ']'
++ hash -r
++ '[' -n '' ']'
++ unset VIRTUAL_ENV
++ unset VIRTUAL_ENV_PROMPT
++ '[' '!' nondestructive = nondestructive ']'
++ '[' linux-gnu = cygwin ']'
++ '[' linux-gnu = msys ']'
++ export VIRTUAL_ENV=/opt/venv
++ VIRTUAL_ENV=/opt/venv
++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
++ export PATH
++ '[' -n '' ']'
++ '[' -z '' ']'
++ _OLD_VIRTUAL_PS1=
++ PS1='(venv) '
++ export PS1
++ VIRTUAL_ENV_PROMPT='(venv) '
++ export VIRTUAL_ENV_PROMPT
++ hash -r
+ pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
++ which gilt
+ GILT=/opt/venv/bin/gilt
+ /opt/venv/bin/gilt overlay
osism.cfg-generics:
- copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
- copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
- copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
- running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
- running `rm render-images.py` in /opt/configuration/environments/manager/
- running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
- running `rm set-versions.py` in /opt/configuration/environments/
+ [[ -e /opt/venv/bin/activate ]]
+ deactivate
+ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
+ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+ export PATH
+ unset _OLD_VIRTUAL_PATH
+ '[' -n '' ']'
+ hash -r
+ '[' -n '' ']'
+ unset VIRTUAL_ENV
~
+ unset VIRTUAL_ENV_PROMPT
orchestrator | + '[' '!' '' = nondestructive ']' 2026-02-16 04:58:36.486544 | orchestrator | + unset -f deactivate 2026-02-16 04:58:36.486553 | orchestrator | + popd 2026-02-16 04:58:36.488174 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]] 2026-02-16 04:58:36.488319 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-02-16 04:58:36.494225 | orchestrator | + set -e 2026-02-16 04:58:36.494341 | orchestrator | + NAMESPACE=kolla/release 2026-02-16 04:58:36.494368 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-16 04:58:36.500869 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-16 04:58:36.508838 | orchestrator | /opt/configuration ~ 2026-02-16 04:58:36.508909 | orchestrator | + set -e 2026-02-16 04:58:36.508922 | orchestrator | + pushd /opt/configuration 2026-02-16 04:58:36.508933 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-16 04:58:36.508944 | orchestrator | + source /opt/venv/bin/activate 2026-02-16 04:58:36.508955 | orchestrator | ++ deactivate nondestructive 2026-02-16 04:58:36.508966 | orchestrator | ++ '[' -n '' ']' 2026-02-16 04:58:36.508977 | orchestrator | ++ '[' -n '' ']' 2026-02-16 04:58:36.508987 | orchestrator | ++ hash -r 2026-02-16 04:58:36.508998 | orchestrator | ++ '[' -n '' ']' 2026-02-16 04:58:36.509005 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-16 04:58:36.509011 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-16 04:58:36.509017 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-16 04:58:36.509024 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-16 04:58:36.509030 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-16 04:58:36.509037 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-16 04:58:36.509047 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-16 04:58:36.509054 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-16 04:58:36.509063 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-16 04:58:36.509069 | orchestrator | ++ export PATH 2026-02-16 04:58:36.509076 | orchestrator | ++ '[' -n '' ']' 2026-02-16 04:58:36.509082 | orchestrator | ++ '[' -z '' ']' 2026-02-16 04:58:36.509088 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-16 04:58:36.509094 | orchestrator | ++ PS1='(venv) ' 2026-02-16 04:58:36.509100 | orchestrator | ++ export PS1 2026-02-16 04:58:36.509106 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-16 04:58:36.509112 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-16 04:58:36.509185 | orchestrator | ++ hash -r 2026-02-16 04:58:36.509195 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-16 04:58:37.016714 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-16 04:58:37.018007 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-16 04:58:37.019397 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-16 04:58:37.020894 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-16 04:58:37.022217 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-16 04:58:37.040078 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-16 04:58:37.042787 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-16 04:58:37.044360 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-16 04:58:37.046708 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-16 04:58:37.082857 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-16 04:58:37.084495 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-16 04:58:37.086249 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-16 04:58:37.087612 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-16 04:58:37.091690 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-16 04:58:37.316897 | orchestrator | ++ which gilt 2026-02-16 04:58:37.318812 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-16 04:58:37.318871 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-16 04:58:37.493846 | orchestrator | osism.cfg-generics: 2026-02-16 04:58:37.556012 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-16 04:58:37.556099 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-16 04:58:37.556208 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-16 04:58:37.556293 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-16 04:58:38.041177 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-16 04:58:38.050462 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-16 04:58:38.396010 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-16 04:58:38.453958 | orchestrator | ~ 2026-02-16 04:58:38.454084 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-16 04:58:38.454100 | orchestrator | + deactivate 2026-02-16 04:58:38.454131 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-16 04:58:38.454143 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-16 04:58:38.454153 | orchestrator | + export PATH 2026-02-16 04:58:38.454161 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-16 04:58:38.454171 | orchestrator | + '[' -n '' ']' 2026-02-16 04:58:38.454180 | orchestrator | + hash -r 2026-02-16 04:58:38.454189 | orchestrator | + '[' -n '' ']' 2026-02-16 04:58:38.454199 | orchestrator | + unset VIRTUAL_ENV 2026-02-16 04:58:38.454208 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-16 04:58:38.454217 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-16 04:58:38.454226 | orchestrator | + unset -f deactivate 2026-02-16 04:58:38.454235 | orchestrator | + popd 2026-02-16 04:58:38.456381 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-02-16 04:58:38.509991 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-16 04:58:38.510106 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-16 04:58:38.607929 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-16 04:58:38.607996 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-16 04:58:38.612681 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-16 04:58:38.618008 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-02-16 04:58:38.681081 | orchestrator | ++ '[' -1 -le 0 ']' 2026-02-16 04:58:38.682164 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0 2026-02-16 04:58:38.782901 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-02-16 04:58:38.782962 | orchestrator | ++ echo true 2026-02-16 04:58:38.783075 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-02-16 04:58:38.785622 | orchestrator | +++ semver 2024.2 2024.2 2026-02-16 04:58:38.871211 | orchestrator | ++ '[' 0 -le 0 ']' 2026-02-16 04:58:38.872034 | orchestrator | +++ semver 2024.2 2025.1 2026-02-16 04:58:38.926331 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-02-16 04:58:38.926402 | orchestrator | ++ echo false 2026-02-16 04:58:38.926661 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-02-16 04:58:38.926843 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-16 04:58:38.926860 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-02-16 04:58:38.927012 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-02-16 04:58:38.927025 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 2026-02-16 04:58:38.932738 | orchestrator | + 
echo 'export RABBITMQ3TO4=true' 2026-02-16 04:58:38.933342 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-02-16 04:58:38.953466 | orchestrator | export RABBITMQ3TO4=true 2026-02-16 04:58:38.956913 | orchestrator | + osism update manager 2026-02-16 04:58:44.574602 | orchestrator | Collecting uv 2026-02-16 04:58:44.688079 | orchestrator | Downloading uv-0.10.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-02-16 04:58:44.710950 | orchestrator | Downloading uv-0.10.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23.0 MB) 2026-02-16 04:58:45.545098 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23.0/23.0 MB 30.9 MB/s eta 0:00:00 2026-02-16 04:58:45.593853 | orchestrator | Installing collected packages: uv 2026-02-16 04:58:46.029050 | orchestrator | Successfully installed uv-0.10.2 2026-02-16 04:58:46.698421 | orchestrator | Resolved 11 packages in 357ms 2026-02-16 04:58:46.732828 | orchestrator | Downloading cryptography (4.3MiB) 2026-02-16 04:58:46.732921 | orchestrator | Downloading ansible-core (2.1MiB) 2026-02-16 04:58:46.732931 | orchestrator | Downloading netaddr (2.2MiB) 2026-02-16 04:58:46.732976 | orchestrator | Downloading ansible (54.5MiB) 2026-02-16 04:58:47.062795 | orchestrator | Downloaded netaddr 2026-02-16 04:58:47.186226 | orchestrator | Downloaded cryptography 2026-02-16 04:58:47.247174 | orchestrator | Downloaded ansible-core 2026-02-16 04:58:53.924223 | orchestrator | Downloaded ansible 2026-02-16 04:58:53.924549 | orchestrator | Prepared 11 packages in 7.22s 2026-02-16 04:58:54.381603 | orchestrator | Installed 11 packages in 455ms 2026-02-16 04:58:54.381766 | orchestrator | + ansible==11.11.0 2026-02-16 04:58:54.381784 | orchestrator | + ansible-core==2.18.13 2026-02-16 04:58:54.381796 | orchestrator | + cffi==2.0.0 2026-02-16 04:58:54.381808 | orchestrator | + cryptography==46.0.5 2026-02-16 04:58:54.381820 | orchestrator | + jinja2==3.1.6 2026-02-16 04:58:54.381831 | orchestrator | 
+ markupsafe==3.0.3 2026-02-16 04:58:54.381841 | orchestrator | + netaddr==1.3.0 2026-02-16 04:58:54.381852 | orchestrator | + packaging==26.0 2026-02-16 04:58:54.381863 | orchestrator | + pycparser==3.0 2026-02-16 04:58:54.382199 | orchestrator | + pyyaml==6.0.3 2026-02-16 04:58:54.382226 | orchestrator | + resolvelib==1.0.1 2026-02-16 04:58:55.647714 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-2018727762cb1g/tmpt4ba9d2k/ansible-collection-serviceszyyea9u6'... 2026-02-16 04:58:57.017279 | orchestrator | Your branch is up to date with 'origin/main'. 2026-02-16 04:58:57.017372 | orchestrator | Already on 'main' 2026-02-16 04:58:57.482001 | orchestrator | Starting galaxy collection install process 2026-02-16 04:58:57.482111 | orchestrator | Process install dependency map 2026-02-16 04:58:57.482120 | orchestrator | Starting collection install process 2026-02-16 04:58:57.482127 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-02-16 04:58:57.482133 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-02-16 04:58:57.482139 | orchestrator | osism.services:999.0.0 was installed successfully 2026-02-16 04:58:58.002799 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-2018992ti3z7qv/tmpf4zlut0b/ansible-playbooks-managerqwquce3o'... 2026-02-16 04:58:58.600045 | orchestrator | Already on 'main' 2026-02-16 04:58:58.600132 | orchestrator | Your branch is up to date with 'origin/main'. 
2026-02-16 04:58:58.864346 | orchestrator | Starting galaxy collection install process 2026-02-16 04:58:58.864416 | orchestrator | Process install dependency map 2026-02-16 04:58:58.864423 | orchestrator | Starting collection install process 2026-02-16 04:58:58.864429 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-02-16 04:58:58.864435 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-02-16 04:58:58.864440 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-02-16 04:58:59.506352 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-02-16 04:58:59.605610 | orchestrator | -vvvv to see details 2026-02-16 04:58:59.897611 | orchestrator | 2026-02-16 04:58:59.897746 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-02-16 04:58:59.897763 | orchestrator | 2026-02-16 04:58:59.897775 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-16 04:59:03.953084 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:03.953189 | orchestrator | 2026-02-16 04:59:03.953207 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-16 04:59:04.018272 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-16 04:59:04.018367 | orchestrator | 2026-02-16 04:59:04.018407 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-16 04:59:05.758272 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:05.758364 | orchestrator | 2026-02-16 04:59:05.758379 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 
2026-02-16 04:59:05.821347 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:05.821440 | orchestrator | 2026-02-16 04:59:05.821455 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-16 04:59:05.883959 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-16 04:59:05.884052 | orchestrator | 2026-02-16 04:59:05.884066 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-16 04:59:10.124907 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-02-16 04:59:10.125030 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-02-16 04:59:10.125046 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-16 04:59:10.125070 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-02-16 04:59:10.125082 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-16 04:59:10.125133 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-16 04:59:10.125146 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-16 04:59:10.125157 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-02-16 04:59:10.125168 | orchestrator | 2026-02-16 04:59:10.125181 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-16 04:59:11.162355 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:11.162470 | orchestrator | 2026-02-16 04:59:11.162494 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-16 04:59:12.073186 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:12.073266 | orchestrator | 2026-02-16 04:59:12.073277 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-16 04:59:12.161045 | 
orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-16 04:59:12.161145 | orchestrator | 2026-02-16 04:59:12.161158 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-02-16 04:59:14.004965 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-02-16 04:59:14.005042 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-02-16 04:59:14.005050 | orchestrator | 2026-02-16 04:59:14.005081 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-16 04:59:14.919070 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:14.919162 | orchestrator | 2026-02-16 04:59:14.919176 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-16 04:59:14.990313 | orchestrator | skipping: [testbed-manager] 2026-02-16 04:59:14.990399 | orchestrator | 2026-02-16 04:59:14.990413 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-16 04:59:15.080508 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-16 04:59:15.080590 | orchestrator | 2026-02-16 04:59:15.080601 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-16 04:59:16.008106 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:16.008212 | orchestrator | 2026-02-16 04:59:16.008229 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-16 04:59:16.083853 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-16 04:59:16.083923 | orchestrator | 2026-02-16 04:59:16.083930 | orchestrator | TASK 
[osism.services.manager : Copy private ssh keys] ************************** 2026-02-16 04:59:17.985919 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-16 04:59:17.986062 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-16 04:59:17.986075 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:17.986084 | orchestrator | 2026-02-16 04:59:17.986092 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-16 04:59:18.862255 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:18.862349 | orchestrator | 2026-02-16 04:59:18.862365 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-16 04:59:18.935155 | orchestrator | skipping: [testbed-manager] 2026-02-16 04:59:18.935250 | orchestrator | 2026-02-16 04:59:18.935265 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-16 04:59:19.022453 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-16 04:59:19.022551 | orchestrator | 2026-02-16 04:59:19.022568 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-16 04:59:19.708416 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:19.708516 | orchestrator | 2026-02-16 04:59:19.708533 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-16 04:59:20.279671 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:20.279783 | orchestrator | 2026-02-16 04:59:20.279794 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-16 04:59:22.060669 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-02-16 04:59:22.060825 | orchestrator | ok: [testbed-manager] => (item=openstack) 2026-02-16 04:59:22.060841 | orchestrator | 2026-02-16 
04:59:22.060849 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-16 04:59:23.227599 | orchestrator | changed: [testbed-manager] 2026-02-16 04:59:23.227687 | orchestrator | 2026-02-16 04:59:23.227699 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-16 04:59:23.818273 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:23.818383 | orchestrator | 2026-02-16 04:59:23.818400 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-16 04:59:24.353945 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:24.354071 | orchestrator | 2026-02-16 04:59:24.354108 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-16 04:59:24.408100 | orchestrator | skipping: [testbed-manager] 2026-02-16 04:59:24.408193 | orchestrator | 2026-02-16 04:59:24.408205 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-16 04:59:24.487350 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-16 04:59:24.487440 | orchestrator | 2026-02-16 04:59:24.487453 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-16 04:59:24.537196 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:24.537291 | orchestrator | 2026-02-16 04:59:24.537308 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-16 04:59:27.498262 | orchestrator | ok: [testbed-manager] => (item=osism) 2026-02-16 04:59:27.498359 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker) 2026-02-16 04:59:27.498372 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager) 2026-02-16 04:59:27.498381 | orchestrator | 2026-02-16 04:59:27.498392 | 
orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-16 04:59:28.575576 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:28.575701 | orchestrator | 2026-02-16 04:59:28.575718 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-16 04:59:29.614121 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:29.614222 | orchestrator | 2026-02-16 04:59:29.614239 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-16 04:59:30.587203 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:30.587303 | orchestrator | 2026-02-16 04:59:30.587320 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-16 04:59:30.672472 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-16 04:59:30.672599 | orchestrator | 2026-02-16 04:59:30.672618 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-16 04:59:30.734065 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:30.734146 | orchestrator | 2026-02-16 04:59:30.734161 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-16 04:59:31.720438 | orchestrator | ok: [testbed-manager] => (item=osism-include) 2026-02-16 04:59:31.720564 | orchestrator | 2026-02-16 04:59:31.720592 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-16 04:59:31.808395 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-16 04:59:31.808486 | orchestrator | 2026-02-16 04:59:31.808501 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 
2026-02-16 04:59:32.776829 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:32.776937 | orchestrator | 2026-02-16 04:59:32.776955 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-02-16 04:59:33.865237 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:33.865306 | orchestrator | 2026-02-16 04:59:33.865313 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-16 04:59:33.940412 | orchestrator | skipping: [testbed-manager] 2026-02-16 04:59:33.940492 | orchestrator | 2026-02-16 04:59:33.940503 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-16 04:59:34.019191 | orchestrator | ok: [testbed-manager] 2026-02-16 04:59:34.019285 | orchestrator | 2026-02-16 04:59:34.019299 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-16 04:59:35.359171 | orchestrator | changed: [testbed-manager] 2026-02-16 04:59:35.359266 | orchestrator | 2026-02-16 04:59:35.359281 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-16 05:00:46.734007 | orchestrator | changed: [testbed-manager] 2026-02-16 05:00:46.734152 | orchestrator | 2026-02-16 05:00:46.734167 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-16 05:00:48.003446 | orchestrator | ok: [testbed-manager] 2026-02-16 05:00:48.003553 | orchestrator | 2026-02-16 05:00:48.003572 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-16 05:00:48.068988 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:00:48.069094 | orchestrator | 2026-02-16 05:00:48.069119 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-16 05:00:48.926218 | orchestrator | ok: [testbed-manager] 2026-02-16 
05:00:48.926321 | orchestrator | 2026-02-16 05:00:48.926338 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-02-16 05:00:49.007010 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:00:49.007113 | orchestrator | 2026-02-16 05:00:49.007130 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-16 05:00:49.007144 | orchestrator | 2026-02-16 05:00:49.007155 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-16 05:01:03.667290 | orchestrator | changed: [testbed-manager] 2026-02-16 05:01:03.667393 | orchestrator | 2026-02-16 05:01:03.667404 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-16 05:02:03.738463 | orchestrator | Pausing for 60 seconds 2026-02-16 05:02:03.738562 | orchestrator | changed: [testbed-manager] 2026-02-16 05:02:03.738576 | orchestrator | 2026-02-16 05:02:03.738587 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-02-16 05:02:03.790424 | orchestrator | ok: [testbed-manager] 2026-02-16 05:02:03.790517 | orchestrator | 2026-02-16 05:02:03.790531 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-16 05:02:07.390439 | orchestrator | changed: [testbed-manager] 2026-02-16 05:02:07.390546 | orchestrator | 2026-02-16 05:02:07.390565 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-16 05:03:10.167515 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-16 05:03:10.167661 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-02-16 05:03:10.167687 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-02-16 05:03:10.167709 | orchestrator | changed: [testbed-manager]
2026-02-16 05:03:10.167728 | orchestrator |
2026-02-16 05:03:10.167748 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-02-16 05:03:21.610636 | orchestrator | changed: [testbed-manager]
2026-02-16 05:03:21.610728 | orchestrator |
2026-02-16 05:03:21.610738 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-02-16 05:03:21.716083 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-02-16 05:03:21.716267 | orchestrator |
2026-02-16 05:03:21.716297 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-16 05:03:21.716318 | orchestrator |
2026-02-16 05:03:21.716338 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-02-16 05:03:21.779453 | orchestrator | skipping: [testbed-manager]
2026-02-16 05:03:21.779553 | orchestrator |
2026-02-16 05:03:21.779570 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-02-16 05:03:21.843932 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-02-16 05:03:21.844052 | orchestrator |
2026-02-16 05:03:21.844166 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-02-16 05:03:22.961503 | orchestrator | changed: [testbed-manager]
2026-02-16 05:03:22.961624 | orchestrator |
2026-02-16 05:03:22.961653 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-02-16 05:03:26.487904 | orchestrator | ok: [testbed-manager]
2026-02-16 05:03:26.488029 | orchestrator |
2026-02-16 05:03:26.488055 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-02-16 05:03:26.571192 | orchestrator | ok: [testbed-manager] => {
2026-02-16 05:03:26.571279 | orchestrator |     "version_check_result.stdout_lines": [
2026-02-16 05:03:26.571291 | orchestrator |         "=== OSISM Container Version Check ===",
2026-02-16 05:03:26.571300 | orchestrator |         "Checking running containers against expected versions...",
2026-02-16 05:03:26.571309 | orchestrator |         "",
2026-02-16 05:03:26.571317 | orchestrator |         "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-02-16 05:03:26.571325 | orchestrator |         " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0",
2026-02-16 05:03:26.571334 | orchestrator |         " Enabled: true",
2026-02-16 05:03:26.571342 | orchestrator |         " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0",
2026-02-16 05:03:26.571351 | orchestrator |         " Status: ✅ MATCH",
2026-02-16 05:03:26.571358 | orchestrator |         "",
2026-02-16 05:03:26.571367 | orchestrator |         "Checking service: osism-ansible (OSISM Ansible Service)",
2026-02-16 05:03:26.571375 | orchestrator |         " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0",
2026-02-16 05:03:26.571383 | orchestrator |         " Enabled: true",
2026-02-16 05:03:26.571391 | orchestrator |         " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0",
2026-02-16 05:03:26.571399 | orchestrator |         " Status: ✅ MATCH",
2026-02-16 05:03:26.571407 | orchestrator |         "",
2026-02-16 05:03:26.571415 | orchestrator |         "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-02-16 05:03:26.571422 | orchestrator |         " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0",
2026-02-16 05:03:26.571430 | orchestrator |         " Enabled: true",
2026-02-16 05:03:26.571438 | orchestrator |         " Running: registry.osism.tech/osism/osism-kubernetes:0.20251208.0",
2026-02-16 05:03:26.571446 | orchestrator |         " Status: ✅ MATCH",
2026-02-16 05:03:26.571453 | orchestrator |         "",
2026-02-16 05:03:26.571461 | orchestrator |         "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-02-16 05:03:26.571469 | orchestrator |         " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0",
2026-02-16 05:03:26.571477 | orchestrator |         " Enabled: true",
2026-02-16 05:03:26.571485 | orchestrator |         " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0",
2026-02-16 05:03:26.571492 | orchestrator |         " Status: ✅ MATCH",
2026-02-16 05:03:26.571500 | orchestrator |         "",
2026-02-16 05:03:26.571508 | orchestrator |         "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-02-16 05:03:26.571516 | orchestrator |         " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0",
2026-02-16 05:03:26.571524 | orchestrator |         " Enabled: true",
2026-02-16 05:03:26.571532 | orchestrator |         " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0",
2026-02-16 05:03:26.571539 | orchestrator |         " Status: ✅ MATCH",
2026-02-16 05:03:26.571547 | orchestrator |         "",
2026-02-16 05:03:26.571555 | orchestrator |         "Checking service: osismclient (OSISM Client)",
2026-02-16 05:03:26.571584 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-16 05:03:26.571592 | orchestrator |         " Enabled: true",
2026-02-16 05:03:26.571600 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-16 05:03:26.571607 | orchestrator |         " Status: ✅ MATCH",
2026-02-16 05:03:26.571615 | orchestrator |         "",
2026-02-16 05:03:26.571623 | orchestrator |         "Checking service: ara-server (ARA Server)",
2026-02-16 05:03:26.571631 | orchestrator |         " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-16 05:03:26.571652 | orchestrator |         " Enabled: true",
2026-02-16 05:03:26.571660 | orchestrator |         " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-16 05:03:26.571676 | orchestrator |         " Status: ✅ MATCH",
2026-02-16 05:03:26.571684 | orchestrator |         "",
2026-02-16 05:03:26.571692 | orchestrator |         "Checking service: mariadb (MariaDB for ARA)",
2026-02-16 05:03:26.571700 | orchestrator |         " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-02-16 05:03:26.571708 | orchestrator |         " Enabled: true",
2026-02-16 05:03:26.571724 | orchestrator |         " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-02-16 05:03:26.571734 | orchestrator |         " Status: ✅ MATCH",
2026-02-16 05:03:26.571743 | orchestrator |         "",
2026-02-16 05:03:26.571752 | orchestrator |         "Checking service: frontend (OSISM Frontend)",
2026-02-16 05:03:26.571761 | orchestrator |         " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0",
2026-02-16 05:03:26.571771 | orchestrator |         " Enabled: true",
2026-02-16 05:03:26.571780 | orchestrator |         " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0",
2026-02-16 05:03:26.571789 | orchestrator |         " Status: ✅ MATCH",
2026-02-16 05:03:26.571799 | orchestrator |         "",
2026-02-16 05:03:26.571812 | orchestrator |         "Checking service: redis (Redis Cache)",
2026-02-16 05:03:26.571821 | orchestrator |         " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-02-16 05:03:26.571831 | orchestrator |         " Enabled: true",
2026-02-16 05:03:26.571840 | orchestrator |         " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-02-16 05:03:26.571849 | orchestrator |         " Status: ✅ MATCH",
2026-02-16 05:03:26.571858 | orchestrator |         "",
2026-02-16 05:03:26.571867 | orchestrator |         "Checking service: api (OSISM API Service)",
2026-02-16 05:03:26.571875 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-16 05:03:26.571885 | orchestrator |         " Enabled: true",
2026-02-16 05:03:26.571894 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-16 05:03:26.571903 | orchestrator |         " Status: ✅ MATCH",
2026-02-16 05:03:26.571912 | orchestrator |         "",
2026-02-16 05:03:26.571921 | orchestrator |         "Checking service: listener (OpenStack Event Listener)",
2026-02-16 05:03:26.571930 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-16 05:03:26.571939 | orchestrator |         " Enabled: true",
2026-02-16 05:03:26.571949 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-16 05:03:26.571958 | orchestrator |         " Status: ✅ MATCH",
2026-02-16 05:03:26.571968 | orchestrator |         "",
2026-02-16 05:03:26.571981 | orchestrator |         "Checking service: openstack (OpenStack Integration)",
2026-02-16 05:03:26.571995 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-16 05:03:26.572009 | orchestrator |         " Enabled: true",
2026-02-16 05:03:26.572023 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-16 05:03:26.572038 | orchestrator |         " Status: ✅ MATCH",
2026-02-16 05:03:26.572053 | orchestrator |         "",
2026-02-16 05:03:26.572064 | orchestrator |         "Checking service: beat (Celery Beat Scheduler)",
2026-02-16 05:03:26.572073 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-16 05:03:26.572087 | orchestrator |         " Enabled: true",
2026-02-16 05:03:26.572100 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-16 05:03:26.572154 | orchestrator |         " Status: ✅ MATCH",
2026-02-16 05:03:26.572170 | orchestrator |         "",
2026-02-16 05:03:26.572184 | orchestrator |         "Checking service: flower (Celery Flower Monitor)",
2026-02-16 05:03:26.572199 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-16 05:03:26.572225 | orchestrator |         " Enabled: true",
2026-02-16 05:03:26.572239 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-16 05:03:26.572255 | orchestrator |         " Status: ✅ MATCH",
2026-02-16 05:03:26.572269 | orchestrator |         "",
2026-02-16 05:03:26.572284 | orchestrator |         "=== Summary ===",
2026-02-16 05:03:26.572299 | orchestrator |         "Errors (version mismatches): 0",
2026-02-16 05:03:26.572314 | orchestrator |         "Warnings (expected containers not running): 0",
2026-02-16 05:03:26.572330 | orchestrator |         "",
2026-02-16 05:03:26.572345 | orchestrator |         "✅ All running containers match expected versions!"
2026-02-16 05:03:26.572359 | orchestrator |     ]
2026-02-16 05:03:26.572375 | orchestrator | }
2026-02-16 05:03:26.572390 | orchestrator |
2026-02-16 05:03:26.572406 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-02-16 05:03:26.624981 | orchestrator | skipping: [testbed-manager]
2026-02-16 05:03:26.625080 | orchestrator |
2026-02-16 05:03:26.625096 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 05:03:26.625191 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
2026-02-16 05:03:26.625215 | orchestrator |
2026-02-16 05:03:39.093993 | orchestrator | 2026-02-16 05:03:39 | INFO  | Task 16fe9cd8-c167-4117-bb5b-14daa8256ec2 (sync inventory) is running in background. Output coming soon.
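The version check above compares, per service, the image tag expected from the compose configuration with the tag actually running, and counts mismatches into the summary. A minimal sketch of that comparison under the assumption that the pairs are plain strings (the real script derives them from docker-compose.yml and `docker inspect`, which are not available here):

```shell
#!/bin/sh
# Hypothetical expected/running pair; values copied from the log above.
expected="registry.osism.tech/osism/osism-ansible:0.20251208.0"
running="registry.osism.tech/osism/osism-ansible:0.20251208.0"

errors=0
if [ "$expected" = "$running" ]; then
    echo "Status: MATCH"
else
    echo "Status: MISMATCH"
    errors=$((errors + 1))
fi
echo "Errors (version mismatches): $errors"
```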
2026-02-16 05:04:07.936058 | orchestrator | 2026-02-16 05:03:40 | INFO  | Starting group_vars file reorganization 2026-02-16 05:04:07.936958 | orchestrator | 2026-02-16 05:03:40 | INFO  | Moved 0 file(s) to their respective directories 2026-02-16 05:04:07.936997 | orchestrator | 2026-02-16 05:03:40 | INFO  | Group_vars file reorganization completed 2026-02-16 05:04:07.937028 | orchestrator | 2026-02-16 05:03:43 | INFO  | Starting variable preparation from inventory 2026-02-16 05:04:07.937037 | orchestrator | 2026-02-16 05:03:46 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-16 05:04:07.937046 | orchestrator | 2026-02-16 05:03:46 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-16 05:04:07.937055 | orchestrator | 2026-02-16 05:03:46 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-16 05:04:07.937063 | orchestrator | 2026-02-16 05:03:46 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-16 05:04:07.937072 | orchestrator | 2026-02-16 05:03:46 | INFO  | Variable preparation completed 2026-02-16 05:04:07.937080 | orchestrator | 2026-02-16 05:03:48 | INFO  | Starting inventory overwrite handling 2026-02-16 05:04:07.937088 | orchestrator | 2026-02-16 05:03:48 | INFO  | Handling group overwrites in 99-overwrite 2026-02-16 05:04:07.937096 | orchestrator | 2026-02-16 05:03:48 | INFO  | Removing group frr:children from 60-generic 2026-02-16 05:04:07.937104 | orchestrator | 2026-02-16 05:03:48 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-16 05:04:07.937112 | orchestrator | 2026-02-16 05:03:48 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-16 05:04:07.937120 | orchestrator | 2026-02-16 05:03:48 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-16 05:04:07.937128 | orchestrator | 2026-02-16 05:03:48 | INFO  | Handling group overwrites in 20-roles 2026-02-16 05:04:07.937136 | orchestrator | 2026-02-16 05:03:48 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-16 05:04:07.937144 | orchestrator | 2026-02-16 05:03:48 | INFO  | Removed 5 group(s) in total 2026-02-16 05:04:07.937152 | orchestrator | 2026-02-16 05:03:48 | INFO  | Inventory overwrite handling completed 2026-02-16 05:04:07.937182 | orchestrator | 2026-02-16 05:03:49 | INFO  | Starting merge of inventory files 2026-02-16 05:04:07.937191 | orchestrator | 2026-02-16 05:03:49 | INFO  | Inventory files merged successfully 2026-02-16 05:04:07.937218 | orchestrator | 2026-02-16 05:03:54 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-16 05:04:07.937227 | orchestrator | 2026-02-16 05:04:06 | INFO  | Successfully wrote ClusterShell configuration 2026-02-16 05:04:08.279721 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-16 05:04:08.279873 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-16 05:04:08.279900 | orchestrator | + local max_attempts=60 2026-02-16 05:04:08.279923 | orchestrator | + local name=kolla-ansible 2026-02-16 05:04:08.279943 | orchestrator | + local attempt_num=1 2026-02-16 05:04:08.280059 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-16 05:04:08.315333 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-16 05:04:08.315430 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-16 05:04:08.315445 | orchestrator | + local max_attempts=60 2026-02-16 05:04:08.315459 | orchestrator | + local name=osism-ansible 2026-02-16 05:04:08.315470 | orchestrator | + local attempt_num=1 2026-02-16 05:04:08.315825 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-16 05:04:08.352698 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-16 05:04:08.352768 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-02-16 05:04:08.525940 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-16 05:04:08.526075 | 
orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-16 05:04:08.526094 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-16 05:04:08.526106 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-16 05:04:08.526122 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-02-16 05:04:08.526133 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-02-16 05:04:08.526144 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-02-16 05:04:08.526155 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-02-16 05:04:08.526214 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 21 seconds ago 2026-02-16 05:04:08.526225 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-02-16 05:04:08.526236 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-02-16 05:04:08.526247 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 
minutes (healthy) 6379/tcp 2026-02-16 05:04:08.526258 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-16 05:04:08.526294 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-02-16 05:04:08.526306 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-02-16 05:04:08.526317 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-02-16 05:04:08.533421 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-02-16 05:04:08.533489 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-02-16 05:04:08.533509 | orchestrator | + osism apply facts 2026-02-16 05:04:20.604839 | orchestrator | 2026-02-16 05:04:20 | INFO  | Task 6114a6c8-ef14-4028-806d-f5978d75eb79 (facts) was prepared for execution. 2026-02-16 05:04:20.604955 | orchestrator | 2026-02-16 05:04:20 | INFO  | It takes a moment until task 6114a6c8-ef14-4028-806d-f5978d75eb79 (facts) has been started and output is visible here. 
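The `wait_for_container_healthy` calls traced above poll `docker inspect -f '{{.State.Health.Status}}'` until the container reports `healthy`. A sketch of how such a function could look, reconstructed from the xtrace; the `probe` function is a hypothetical stand-in for the `docker inspect` call, which needs a running Docker daemon:

```shell
#!/bin/sh
# probe mocks: /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
probe() { echo healthy; }

wait_for_container_healthy() {
    max_attempts=$1
    name=$2
    attempt_num=1
    until [ "$(probe "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "$name never became healthy"
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
    echo "$name is healthy after $attempt_num check(s)"
}

result=$(wait_for_container_healthy 60 kolla-ansible)
echo "$result"
```

In the log both containers report `healthy` on the first probe, so the loop body never runs.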
2026-02-16 05:04:43.688526 | orchestrator |
2026-02-16 05:04:43.688640 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-16 05:04:43.688656 | orchestrator |
2026-02-16 05:04:43.688669 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-16 05:04:43.688680 | orchestrator | Monday 16 February 2026 05:04:26 +0000 (0:00:01.936) 0:00:01.936 *******
2026-02-16 05:04:43.688692 | orchestrator | ok: [testbed-manager]
2026-02-16 05:04:43.688704 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:04:43.688715 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:04:43.688726 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:04:43.688737 | orchestrator | ok: [testbed-node-3]
2026-02-16 05:04:43.688747 | orchestrator | ok: [testbed-node-4]
2026-02-16 05:04:43.688758 | orchestrator | ok: [testbed-node-5]
2026-02-16 05:04:43.688769 | orchestrator |
2026-02-16 05:04:43.688780 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-16 05:04:43.688790 | orchestrator | Monday 16 February 2026 05:04:30 +0000 (0:00:03.543) 0:00:05.479 *******
2026-02-16 05:04:43.688801 | orchestrator | skipping: [testbed-manager]
2026-02-16 05:04:43.688813 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:04:43.688824 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:04:43.688835 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:04:43.688846 | orchestrator | skipping: [testbed-node-3]
2026-02-16 05:04:43.688856 | orchestrator | skipping: [testbed-node-4]
2026-02-16 05:04:43.688867 | orchestrator | skipping: [testbed-node-5]
2026-02-16 05:04:43.688878 | orchestrator |
2026-02-16 05:04:43.688888 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-16 05:04:43.688899 | orchestrator |
2026-02-16 05:04:43.688910 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-16 05:04:43.688921 | orchestrator | Monday 16 February 2026 05:04:33 +0000 (0:00:02.645) 0:00:08.125 *******
2026-02-16 05:04:43.688931 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:04:43.688963 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:04:43.688975 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:04:43.688986 | orchestrator | ok: [testbed-manager]
2026-02-16 05:04:43.689001 | orchestrator | ok: [testbed-node-4]
2026-02-16 05:04:43.689012 | orchestrator | ok: [testbed-node-3]
2026-02-16 05:04:43.689023 | orchestrator | ok: [testbed-node-5]
2026-02-16 05:04:43.689033 | orchestrator |
2026-02-16 05:04:43.689044 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-16 05:04:43.689055 | orchestrator |
2026-02-16 05:04:43.689067 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-16 05:04:43.689081 | orchestrator | Monday 16 February 2026 05:04:40 +0000 (0:00:07.287) 0:00:15.413 *******
2026-02-16 05:04:43.689094 | orchestrator | skipping: [testbed-manager]
2026-02-16 05:04:43.689130 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:04:43.689144 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:04:43.689157 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:04:43.689169 | orchestrator | skipping: [testbed-node-3]
2026-02-16 05:04:43.689179 | orchestrator | skipping: [testbed-node-4]
2026-02-16 05:04:43.689190 | orchestrator | skipping: [testbed-node-5]
2026-02-16 05:04:43.689235 | orchestrator |
2026-02-16 05:04:43.689246 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 05:04:43.689258 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 05:04:43.689270 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 05:04:43.689280 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 05:04:43.689291 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 05:04:43.689302 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 05:04:43.689313 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 05:04:43.689324 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-16 05:04:43.689334 | orchestrator |
2026-02-16 05:04:43.689345 | orchestrator |
2026-02-16 05:04:43.689356 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 05:04:43.689367 | orchestrator | Monday 16 February 2026 05:04:43 +0000 (0:00:02.768) 0:00:18.182 *******
2026-02-16 05:04:43.689378 | orchestrator | ===============================================================================
2026-02-16 05:04:43.689388 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.29s
2026-02-16 05:04:43.689400 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.54s
2026-02-16 05:04:43.689411 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.77s
2026-02-16 05:04:43.689422 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.65s
2026-02-16 05:04:43.995619 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0
2026-02-16 05:04:44.098147 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-16 05:04:44.099308 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-02-16 05:04:44.139116 | orchestrator | + OPENSTACK_VERSION=2025.1
2026-02-16 05:04:44.139241 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1
2026-02-16 05:04:44.146323 | orchestrator | + set -e
2026-02-16 05:04:44.146417 | orchestrator | + NAMESPACE=kolla/release/2025.1
2026-02-16 05:04:44.146430 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-16 05:04:44.152056 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh
2026-02-16 05:04:44.160727 | orchestrator |
2026-02-16 05:04:44.160818 | orchestrator | # UPGRADE SERVICES
2026-02-16 05:04:44.160832 | orchestrator |
2026-02-16 05:04:44.160845 | orchestrator | + set -e
2026-02-16 05:04:44.160856 | orchestrator | + echo
2026-02-16 05:04:44.160867 | orchestrator | + echo '# UPGRADE SERVICES'
2026-02-16 05:04:44.160878 | orchestrator | + echo
2026-02-16 05:04:44.160889 | orchestrator | + source /opt/manager-vars.sh
2026-02-16 05:04:44.161807 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-16 05:04:44.161877 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-16 05:04:44.161888 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-16 05:04:44.161899 | orchestrator | ++ CEPH_VERSION=reef
2026-02-16 05:04:44.161909 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-16 05:04:44.161920 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-16 05:04:44.161930 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-16 05:04:44.161966 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-16 05:04:44.161976 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-16 05:04:44.161986 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-16 05:04:44.161996 | orchestrator | ++ export ARA=false
2026-02-16 05:04:44.162075 | orchestrator | ++ ARA=false
2026-02-16 05:04:44.162087 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-16 05:04:44.162096 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-16 05:04:44.162106 | orchestrator | ++ export TEMPEST=false
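The traces above gate upgrade steps on a `semver` helper (`semver 10.0.0-rc.1 10.0.0-0` prints `1`, and the script then tests `[[ 1 -ge 0 ]]`, i.e. "greater than or equal"). A portable sketch of such a version gate, assuming `sort -V` as a stand-in comparator (note: `sort -V` does not implement full semver pre-release ordering, so this is only an approximation of the testbed's helper):

```shell
#!/bin/sh
# version_ge A B: succeeds if A >= B in GNU version-sort order.
version_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

# Mirrors the guard seen in the log: only run the new play on new managers.
if version_ge "9.5.0" "7.0.0"; then
    echo "manager >= 7.0.0: run pull-images via osism apply"
fi
```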
2026-02-16 05:04:44.162115 | orchestrator | ++ TEMPEST=false
2026-02-16 05:04:44.162125 | orchestrator | ++ export IS_ZUUL=true
2026-02-16 05:04:44.162134 | orchestrator | ++ IS_ZUUL=true
2026-02-16 05:04:44.162144 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120
2026-02-16 05:04:44.162154 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120
2026-02-16 05:04:44.162163 | orchestrator | ++ export EXTERNAL_API=false
2026-02-16 05:04:44.162173 | orchestrator | ++ EXTERNAL_API=false
2026-02-16 05:04:44.162182 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-16 05:04:44.162192 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-16 05:04:44.162221 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-16 05:04:44.162231 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-16 05:04:44.162240 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-16 05:04:44.162250 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-16 05:04:44.162259 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-16 05:04:44.162269 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-16 05:04:44.162278 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false
2026-02-16 05:04:44.162287 | orchestrator | + SKIP_CEPH_UPGRADE=false
2026-02-16 05:04:44.162297 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-02-16 05:04:44.170709 | orchestrator | + set -e
2026-02-16 05:04:44.170788 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-16 05:04:44.172341 | orchestrator |
2026-02-16 05:04:44.172405 | orchestrator | # PULL IMAGES
2026-02-16 05:04:44.172416 | orchestrator |
2026-02-16 05:04:44.172425 | orchestrator | ++ export INTERACTIVE=false
2026-02-16 05:04:44.172434 | orchestrator | ++ INTERACTIVE=false
2026-02-16 05:04:44.172441 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-16 05:04:44.172448 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-16 05:04:44.172455 | orchestrator | + source /opt/manager-vars.sh
2026-02-16 05:04:44.172462 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-16 05:04:44.172470 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-16 05:04:44.172477 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-16 05:04:44.172485 | orchestrator | ++ CEPH_VERSION=reef
2026-02-16 05:04:44.172520 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-16 05:04:44.172545 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-16 05:04:44.172553 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-16 05:04:44.172560 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-16 05:04:44.172568 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-16 05:04:44.172575 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-16 05:04:44.172583 | orchestrator | ++ export ARA=false
2026-02-16 05:04:44.172590 | orchestrator | ++ ARA=false
2026-02-16 05:04:44.172597 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-16 05:04:44.172604 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-16 05:04:44.172612 | orchestrator | ++ export TEMPEST=false
2026-02-16 05:04:44.172619 | orchestrator | ++ TEMPEST=false
2026-02-16 05:04:44.172626 | orchestrator | ++ export IS_ZUUL=true
2026-02-16 05:04:44.172633 | orchestrator | ++ IS_ZUUL=true
2026-02-16 05:04:44.172640 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120
2026-02-16 05:04:44.172648 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.120
2026-02-16 05:04:44.172655 | orchestrator | ++ export EXTERNAL_API=false
2026-02-16 05:04:44.172662 | orchestrator | ++ EXTERNAL_API=false
2026-02-16 05:04:44.172670 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-16 05:04:44.172677 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-16 05:04:44.172685 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-16 05:04:44.172692 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-16 05:04:44.172699 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-16 05:04:44.172706 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-16 05:04:44.172713 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-16 05:04:44.172721 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-16 05:04:44.172728 | orchestrator | + echo
2026-02-16 05:04:44.172735 | orchestrator | + echo '# PULL IMAGES'
2026-02-16 05:04:44.172753 | orchestrator | + echo
2026-02-16 05:04:44.172836 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-16 05:04:44.229616 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-16 05:04:44.229888 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-16 05:04:46.297619 | orchestrator | 2026-02-16 05:04:46 | INFO  | Trying to run play pull-images in environment custom
2026-02-16 05:04:56.402660 | orchestrator | 2026-02-16 05:04:56 | INFO  | Task 488e9dad-0870-48de-91c4-2a5b04c22f6a (pull-images) was prepared for execution.
2026-02-16 05:04:56.402763 | orchestrator | 2026-02-16 05:04:56 | INFO  | Task 488e9dad-0870-48de-91c4-2a5b04c22f6a is running in background. No more output. Check ARA for logs.
2026-02-16 05:04:56.722513 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-02-16 05:04:56.733635 | orchestrator | + set -e
2026-02-16 05:04:56.733727 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-16 05:04:56.733750 | orchestrator | ++ export INTERACTIVE=false
2026-02-16 05:04:56.733770 | orchestrator | ++ INTERACTIVE=false
2026-02-16 05:04:56.733788 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-16 05:04:56.733805 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-16 05:04:56.733823 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-16 05:04:56.735585 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-16 05:04:56.748721 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-02-16 05:04:56.748783 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-02-16 05:04:56.749888 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3
2026-02-16 05:04:56.798535 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-16 05:04:56.798640 | orchestrator | + osism apply frr
2026-02-16 05:05:08.913799 | orchestrator | 2026-02-16 05:05:08 | INFO  | Task c9bea99c-9673-4e88-b14c-9a9a11020fc4 (frr) was prepared for execution.
2026-02-16 05:05:08.913877 | orchestrator | 2026-02-16 05:05:08 | INFO  | It takes a moment until task c9bea99c-9673-4e88-b14c-9a9a11020fc4 (frr) has been started and output is visible here.
2026-02-16 05:05:30.218799 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-02-16 05:05:30.218907 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-02-16 05:05:30.218929 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-02-16 05:05:30.218938 | orchestrator | (): 'NoneType' object is not subscriptable
2026-02-16 05:05:30.218957 | orchestrator |
2026-02-16 05:05:30.218967 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-02-16 05:05:30.218975 | orchestrator |
2026-02-16 05:05:30.218984 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-02-16 05:05:30.218993 | orchestrator | Monday 16 February 2026 05:05:16 +0000 (0:00:02.525) 0:00:02.525 *******
2026-02-16 05:05:30.219002 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-02-16 05:05:30.219012 | orchestrator |
2026-02-16 05:05:30.219022 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-02-16 05:05:30.219031 | orchestrator | Monday 16 February 2026 05:05:17 +0000 (0:00:01.186) 0:00:03.712 *******
2026-02-16 05:05:30.219039 | orchestrator | ok: [testbed-manager]
2026-02-16 05:05:30.219049 | orchestrator |
2026-02-16 05:05:30.219058 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-02-16 05:05:30.219067 | orchestrator | Monday 16 February 2026 05:05:18 +0000 (0:00:01.385) 0:00:05.097 *******
2026-02-16 05:05:30.219075 | orchestrator | ok: [testbed-manager]
2026-02-16 05:05:30.219084 | orchestrator |
2026-02-16 05:05:30.219093 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-02-16 05:05:30.219101 | orchestrator | Monday 16 February 2026 05:05:20 +0000 (0:00:01.923) 0:00:07.020 *******
2026-02-16 05:05:30.219110 | orchestrator | ok: [testbed-manager]
2026-02-16 05:05:30.219118 | orchestrator |
2026-02-16 05:05:30.219127 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-02-16 05:05:30.219136 | orchestrator | Monday 16 February 2026 05:05:21 +0000 (0:00:00.938) 0:00:07.959 *******
2026-02-16 05:05:30.219165 | orchestrator | ok: [testbed-manager]
2026-02-16 05:05:30.219174 | orchestrator |
2026-02-16 05:05:30.219182 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-02-16 05:05:30.219191 | orchestrator | Monday 16 February 2026 05:05:22 +0000 (0:00:00.984) 0:00:08.943 *******
2026-02-16 05:05:30.219199 | orchestrator | ok: [testbed-manager]
2026-02-16 05:05:30.219207 | orchestrator |
2026-02-16 05:05:30.219217 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-02-16 05:05:30.219225 | orchestrator | Monday 16 February 2026 05:05:23 +0000 (0:00:01.413) 0:00:10.356 *******
2026-02-16 05:05:30.219266 | orchestrator | skipping: [testbed-manager]
2026-02-16 05:05:30.219274 | orchestrator |
2026-02-16 05:05:30.219282 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-02-16 05:05:30.219290 | orchestrator | Monday 16 February 2026 05:05:24 +0000 (0:00:00.185) 0:00:10.541 *******
2026-02-16 05:05:30.219299 | orchestrator | skipping: [testbed-manager]
2026-02-16 05:05:30.219307 | orchestrator | 2026-02-16 05:05:30.219316 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-16 05:05:30.219324 | orchestrator | Monday 16 February 2026 05:05:24 +0000 (0:00:00.194) 0:00:10.736 ******* 2026-02-16 05:05:30.219333 | orchestrator | ok: [testbed-manager] 2026-02-16 05:05:30.219345 | orchestrator | 2026-02-16 05:05:30.219357 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-16 05:05:30.219369 | orchestrator | Monday 16 February 2026 05:05:25 +0000 (0:00:01.013) 0:00:11.749 ******* 2026-02-16 05:05:30.219381 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-16 05:05:30.219410 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-16 05:05:30.219423 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-16 05:05:30.219435 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-16 05:05:30.219448 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-16 05:05:30.219459 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-16 05:05:30.219471 | orchestrator | 2026-02-16 05:05:30.219483 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-16 05:05:30.219494 | orchestrator | Monday 16 February 2026 05:05:28 +0000 (0:00:02.761) 0:00:14.510 ******* 2026-02-16 05:05:30.219503 | orchestrator | ok: [testbed-manager] 2026-02-16 05:05:30.219511 | orchestrator | 2026-02-16 05:05:30.219519 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 05:05:30.219528 | 
orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-16 05:05:30.219536 | orchestrator | 2026-02-16 05:05:30.219544 | orchestrator | 2026-02-16 05:05:30.219552 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 05:05:30.219560 | orchestrator | Monday 16 February 2026 05:05:29 +0000 (0:00:01.831) 0:00:16.341 ******* 2026-02-16 05:05:30.219569 | orchestrator | =============================================================================== 2026-02-16 05:05:30.219577 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.76s 2026-02-16 05:05:30.219585 | orchestrator | osism.services.frr : Install frr package -------------------------------- 1.92s 2026-02-16 05:05:30.219610 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.83s 2026-02-16 05:05:30.219618 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.41s 2026-02-16 05:05:30.219625 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.39s 2026-02-16 05:05:30.219633 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 1.19s 2026-02-16 05:05:30.219642 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.01s 2026-02-16 05:05:30.219661 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.98s 2026-02-16 05:05:30.219669 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.94s 2026-02-16 05:05:30.219677 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.19s 2026-02-16 05:05:30.219685 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.19s 2026-02-16 05:05:30.565552 | orchestrator | + osism apply 
kubernetes 2026-02-16 05:05:32.843633 | orchestrator | 2026-02-16 05:05:32 | INFO  | Task a81840a1-c261-4de1-828d-ad6994b7c18b (kubernetes) was prepared for execution. 2026-02-16 05:05:32.843734 | orchestrator | 2026-02-16 05:05:32 | INFO  | It takes a moment until task a81840a1-c261-4de1-828d-ad6994b7c18b (kubernetes) has been started and output is visible here. 2026-02-16 05:06:17.544595 | orchestrator | 2026-02-16 05:06:17.544700 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-16 05:06:17.544716 | orchestrator | 2026-02-16 05:06:17.544728 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-16 05:06:17.544740 | orchestrator | Monday 16 February 2026 05:05:39 +0000 (0:00:02.216) 0:00:02.216 ******* 2026-02-16 05:06:17.544750 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:06:17.544762 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:06:17.544772 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:06:17.544782 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:06:17.544792 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:06:17.544802 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:06:17.544812 | orchestrator | 2026-02-16 05:06:17.544822 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-16 05:06:17.544832 | orchestrator | Monday 16 February 2026 05:05:43 +0000 (0:00:03.998) 0:00:06.214 ******* 2026-02-16 05:06:17.544842 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:06:17.544853 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:06:17.544863 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:06:17.544874 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:06:17.544884 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:06:17.544894 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:06:17.544904 | orchestrator | 2026-02-16 05:06:17.544914 | 
orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-16 05:06:17.544924 | orchestrator | Monday 16 February 2026 05:05:45 +0000 (0:00:01.936) 0:00:08.150 ******* 2026-02-16 05:06:17.544935 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:06:17.544945 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:06:17.544955 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:06:17.544965 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:06:17.544975 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:06:17.544985 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:06:17.544995 | orchestrator | 2026-02-16 05:06:17.545005 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-16 05:06:17.545015 | orchestrator | Monday 16 February 2026 05:05:47 +0000 (0:00:01.941) 0:00:10.092 ******* 2026-02-16 05:06:17.545025 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:06:17.545035 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:06:17.545045 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:06:17.545055 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:06:17.545065 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:06:17.545075 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:06:17.545085 | orchestrator | 2026-02-16 05:06:17.545095 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-16 05:06:17.545106 | orchestrator | Monday 16 February 2026 05:05:50 +0000 (0:00:02.697) 0:00:12.789 ******* 2026-02-16 05:06:17.545118 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:06:17.545130 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:06:17.545142 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:06:17.545153 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:06:17.545187 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:06:17.545200 | orchestrator | ok: [testbed-node-2] 
2026-02-16 05:06:17.545212 | orchestrator | 2026-02-16 05:06:17.545224 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-02-16 05:06:17.545235 | orchestrator | Monday 16 February 2026 05:05:52 +0000 (0:00:02.474) 0:00:15.263 ******* 2026-02-16 05:06:17.545248 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:06:17.545259 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:06:17.545271 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:06:17.545311 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:06:17.545328 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:06:17.545346 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:06:17.545362 | orchestrator | 2026-02-16 05:06:17.545378 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-16 05:06:17.545396 | orchestrator | Monday 16 February 2026 05:05:55 +0000 (0:00:02.883) 0:00:18.147 ******* 2026-02-16 05:06:17.545414 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:06:17.545430 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:06:17.545447 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:06:17.545457 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:06:17.545467 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:06:17.545477 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:06:17.545486 | orchestrator | 2026-02-16 05:06:17.545496 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-16 05:06:17.545506 | orchestrator | Monday 16 February 2026 05:05:57 +0000 (0:00:02.126) 0:00:20.273 ******* 2026-02-16 05:06:17.545516 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:06:17.545525 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:06:17.545538 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:06:17.545554 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:06:17.545582 | orchestrator 
| skipping: [testbed-node-1] 2026-02-16 05:06:17.545598 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:06:17.545614 | orchestrator | 2026-02-16 05:06:17.545630 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-16 05:06:17.545645 | orchestrator | Monday 16 February 2026 05:05:59 +0000 (0:00:01.764) 0:00:22.038 ******* 2026-02-16 05:06:17.545660 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-16 05:06:17.545676 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 05:06:17.545693 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:06:17.545709 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-16 05:06:17.545726 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 05:06:17.545736 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:06:17.545746 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-16 05:06:17.545755 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 05:06:17.545765 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:06:17.545775 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-16 05:06:17.545784 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 05:06:17.545794 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:06:17.545822 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-16 05:06:17.545832 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 05:06:17.545842 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:06:17.545852 | orchestrator | skipping: [testbed-node-2] => 
(item=net.bridge.bridge-nf-call-iptables)  2026-02-16 05:06:17.545861 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-16 05:06:17.545870 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:06:17.545880 | orchestrator | 2026-02-16 05:06:17.545901 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-02-16 05:06:17.545910 | orchestrator | Monday 16 February 2026 05:06:01 +0000 (0:00:02.089) 0:00:24.127 ******* 2026-02-16 05:06:17.545920 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:06:17.545929 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:06:17.545939 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:06:17.545948 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:06:17.545958 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:06:17.545967 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:06:17.545977 | orchestrator | 2026-02-16 05:06:17.545986 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-16 05:06:17.545997 | orchestrator | Monday 16 February 2026 05:06:03 +0000 (0:00:02.085) 0:00:26.213 ******* 2026-02-16 05:06:17.546007 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:06:17.546091 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:06:17.546104 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:06:17.546114 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:06:17.546124 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:06:17.546133 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:06:17.546143 | orchestrator | 2026-02-16 05:06:17.546153 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-16 05:06:17.546163 | orchestrator | Monday 16 February 2026 05:06:05 +0000 (0:00:01.951) 0:00:28.164 ******* 2026-02-16 05:06:17.546172 | orchestrator | ok: 
[testbed-node-3] 2026-02-16 05:06:17.546182 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:06:17.546191 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:06:17.546201 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:06:17.546210 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:06:17.546219 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:06:17.546229 | orchestrator | 2026-02-16 05:06:17.546239 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-16 05:06:17.546248 | orchestrator | Monday 16 February 2026 05:06:08 +0000 (0:00:03.009) 0:00:31.174 ******* 2026-02-16 05:06:17.546258 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:06:17.546268 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:06:17.546347 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:06:17.546362 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:06:17.546372 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:06:17.546381 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:06:17.546391 | orchestrator | 2026-02-16 05:06:17.546401 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-16 05:06:17.546411 | orchestrator | Monday 16 February 2026 05:06:10 +0000 (0:00:02.134) 0:00:33.308 ******* 2026-02-16 05:06:17.546420 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:06:17.546430 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:06:17.546439 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:06:17.546449 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:06:17.546458 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:06:17.546468 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:06:17.546478 | orchestrator | 2026-02-16 05:06:17.546488 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-16 
05:06:17.546500 | orchestrator | Monday 16 February 2026 05:06:12 +0000 (0:00:02.178) 0:00:35.487 ******* 2026-02-16 05:06:17.546510 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:06:17.546523 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:06:17.546533 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:06:17.546543 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:06:17.546552 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:06:17.546562 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:06:17.546571 | orchestrator | 2026-02-16 05:06:17.546581 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-02-16 05:06:17.546591 | orchestrator | Monday 16 February 2026 05:06:14 +0000 (0:00:01.888) 0:00:37.375 ******* 2026-02-16 05:06:17.546610 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-02-16 05:06:17.546619 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-02-16 05:06:17.546629 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:06:17.546639 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-02-16 05:06:17.546648 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-02-16 05:06:17.546658 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:06:17.546667 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-02-16 05:06:17.546677 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-02-16 05:06:17.546687 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:06:17.546696 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-02-16 05:06:17.546706 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-02-16 05:06:17.546716 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:06:17.546725 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-02-16 05:06:17.546735 | orchestrator | skipping: [testbed-node-1] 
=> (item=rancher/k3s)  2026-02-16 05:06:17.546744 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:06:17.546754 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-02-16 05:06:17.546763 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-02-16 05:06:17.546773 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:06:17.546783 | orchestrator | 2026-02-16 05:06:17.546793 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-02-16 05:06:17.546802 | orchestrator | Monday 16 February 2026 05:06:16 +0000 (0:00:02.216) 0:00:39.592 ******* 2026-02-16 05:06:17.546812 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:06:17.546822 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:06:17.546841 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:07:54.409821 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:07:54.409980 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:07:54.410067 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:07:54.410093 | orchestrator | 2026-02-16 05:07:54.410114 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-02-16 05:07:54.410135 | orchestrator | Monday 16 February 2026 05:06:19 +0000 (0:00:02.032) 0:00:41.625 ******* 2026-02-16 05:07:54.410152 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:07:54.410171 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:07:54.410189 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:07:54.410207 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:07:54.410226 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:07:54.410246 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:07:54.410265 | orchestrator | 2026-02-16 05:07:54.410285 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-02-16 
05:07:54.410303 | orchestrator | 2026-02-16 05:07:54.410322 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-02-16 05:07:54.410342 | orchestrator | Monday 16 February 2026 05:06:21 +0000 (0:00:02.822) 0:00:44.447 ******* 2026-02-16 05:07:54.410390 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:07:54.410412 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:07:54.410457 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:07:54.410479 | orchestrator | 2026-02-16 05:07:54.410506 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-02-16 05:07:54.410525 | orchestrator | Monday 16 February 2026 05:06:23 +0000 (0:00:01.830) 0:00:46.278 ******* 2026-02-16 05:07:54.410543 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:07:54.410563 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:07:54.410583 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:07:54.410606 | orchestrator | 2026-02-16 05:07:54.410626 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-02-16 05:07:54.410648 | orchestrator | Monday 16 February 2026 05:06:25 +0000 (0:00:02.111) 0:00:48.389 ******* 2026-02-16 05:07:54.410698 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:07:54.410720 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:07:54.410740 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:07:54.410760 | orchestrator | 2026-02-16 05:07:54.410778 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-02-16 05:07:54.410797 | orchestrator | Monday 16 February 2026 05:06:27 +0000 (0:00:02.120) 0:00:50.510 ******* 2026-02-16 05:07:54.410815 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:07:54.410833 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:07:54.410852 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:07:54.410870 | orchestrator | 2026-02-16 
05:07:54.410887 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-02-16 05:07:54.410903 | orchestrator | Monday 16 February 2026 05:06:29 +0000 (0:00:01.948) 0:00:52.458 ******* 2026-02-16 05:07:54.410920 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:07:54.410938 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:07:54.410956 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:07:54.410975 | orchestrator | 2026-02-16 05:07:54.410994 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-02-16 05:07:54.411012 | orchestrator | Monday 16 February 2026 05:06:31 +0000 (0:00:01.505) 0:00:53.964 ******* 2026-02-16 05:07:54.411031 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:07:54.411048 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:07:54.411067 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:07:54.411085 | orchestrator | 2026-02-16 05:07:54.411104 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-02-16 05:07:54.411123 | orchestrator | Monday 16 February 2026 05:06:33 +0000 (0:00:01.748) 0:00:55.713 ******* 2026-02-16 05:07:54.411141 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:07:54.411159 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:07:54.411176 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:07:54.411194 | orchestrator | 2026-02-16 05:07:54.411214 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-02-16 05:07:54.411232 | orchestrator | Monday 16 February 2026 05:06:35 +0000 (0:00:02.252) 0:00:57.965 ******* 2026-02-16 05:07:54.411250 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:07:54.411269 | orchestrator | 2026-02-16 05:07:54.411288 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] 
******************************* 2026-02-16 05:07:54.411345 | orchestrator | Monday 16 February 2026 05:06:37 +0000 (0:00:01.923) 0:00:59.889 ******* 2026-02-16 05:07:54.411391 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:07:54.411410 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:07:54.411428 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:07:54.411446 | orchestrator | 2026-02-16 05:07:54.411464 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-02-16 05:07:54.411483 | orchestrator | Monday 16 February 2026 05:06:39 +0000 (0:00:02.493) 0:01:02.383 ******* 2026-02-16 05:07:54.411502 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:07:54.411520 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:07:54.411538 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:07:54.411554 | orchestrator | 2026-02-16 05:07:54.411570 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-02-16 05:07:54.411589 | orchestrator | Monday 16 February 2026 05:06:41 +0000 (0:00:01.710) 0:01:04.093 ******* 2026-02-16 05:07:54.411606 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:07:54.411623 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:07:54.411642 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:07:54.411660 | orchestrator | 2026-02-16 05:07:54.411677 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-02-16 05:07:54.411695 | orchestrator | Monday 16 February 2026 05:06:43 +0000 (0:00:01.918) 0:01:06.011 ******* 2026-02-16 05:07:54.411713 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:07:54.411732 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:07:54.411750 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:07:54.411789 | orchestrator | 2026-02-16 05:07:54.411808 | orchestrator | TASK [k3s_server : Deploy metallb manifest] 
************************************ 2026-02-16 05:07:54.411825 | orchestrator | Monday 16 February 2026 05:06:45 +0000 (0:00:02.496) 0:01:08.508 ******* 2026-02-16 05:07:54.411842 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:07:54.411860 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:07:54.411909 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:07:54.411928 | orchestrator | 2026-02-16 05:07:54.411947 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-02-16 05:07:54.411964 | orchestrator | Monday 16 February 2026 05:06:47 +0000 (0:00:01.394) 0:01:09.902 ******* 2026-02-16 05:07:54.411983 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:07:54.411999 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:07:54.412015 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:07:54.412034 | orchestrator | 2026-02-16 05:07:54.412051 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-02-16 05:07:54.412068 | orchestrator | Monday 16 February 2026 05:06:48 +0000 (0:00:01.601) 0:01:11.504 ******* 2026-02-16 05:07:54.412086 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:07:54.412103 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:07:54.412121 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:07:54.412139 | orchestrator | 2026-02-16 05:07:54.412156 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-16 05:07:54.412173 | orchestrator | Monday 16 February 2026 05:06:51 +0000 (0:00:02.095) 0:01:13.599 ******* 2026-02-16 05:07:54.412191 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:07:54.412208 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:07:54.412225 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:07:54.412243 | orchestrator | 2026-02-16 05:07:54.412260 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes 
version] *** 2026-02-16 05:07:54.412278 | orchestrator | Monday 16 February 2026 05:06:52 +0000 (0:00:01.918) 0:01:15.518 ******* 2026-02-16 05:07:54.412296 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:07:54.412313 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:07:54.412329 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:07:54.412345 | orchestrator | 2026-02-16 05:07:54.412405 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-16 05:07:54.412425 | orchestrator | Monday 16 February 2026 05:06:54 +0000 (0:00:01.447) 0:01:16.965 ******* 2026-02-16 05:07:54.412443 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-16 05:07:54.412463 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-16 05:07:54.412480 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-16 05:07:54.412497 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-16 05:07:54.412515 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-16 05:07:54.412533 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-02-16 05:07:54.412550 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:07:54.412567 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:07:54.412585 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:07:54.412603 | orchestrator | 2026-02-16 05:07:54.412621 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-16 05:07:54.412638 | orchestrator | Monday 16 February 2026 05:07:17 +0000 (0:00:23.396) 0:01:40.362 ******* 2026-02-16 05:07:54.412655 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:07:54.412672 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:07:54.412708 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:07:54.412725 | orchestrator | 2026-02-16 05:07:54.412742 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-16 05:07:54.412760 | orchestrator | Monday 16 February 2026 05:07:19 +0000 (0:00:01.321) 0:01:41.683 ******* 2026-02-16 05:07:54.412777 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:07:54.412795 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:07:54.412813 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:07:54.412831 | orchestrator | 2026-02-16 05:07:54.412849 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-16 05:07:54.412867 | orchestrator | Monday 16 February 2026 05:07:21 +0000 (0:00:02.155) 0:01:43.839 ******* 2026-02-16 05:07:54.412886 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:07:54.412902 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:07:54.412919 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:07:54.412938 | orchestrator | 2026-02-16 05:07:54.412956 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-16 05:07:54.412974 | orchestrator | Monday 16 February 2026 05:07:23 +0000 (0:00:02.260) 0:01:46.100 ******* 2026-02-16 05:07:54.412992 | orchestrator 
| changed: [testbed-node-2] 2026-02-16 05:07:54.413011 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:07:54.413028 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:07:54.413045 | orchestrator | 2026-02-16 05:07:54.413063 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-16 05:07:54.413081 | orchestrator | Monday 16 February 2026 05:07:49 +0000 (0:00:25.506) 0:02:11.606 ******* 2026-02-16 05:07:54.413098 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:07:54.413117 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:07:54.413136 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:07:54.413155 | orchestrator | 2026-02-16 05:07:54.413192 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-16 05:07:54.413212 | orchestrator | Monday 16 February 2026 05:07:50 +0000 (0:00:01.716) 0:02:13.323 ******* 2026-02-16 05:07:54.413231 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:07:54.413249 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:07:54.413269 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:07:54.413289 | orchestrator | 2026-02-16 05:07:54.413307 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-16 05:07:54.413327 | orchestrator | Monday 16 February 2026 05:07:52 +0000 (0:00:01.699) 0:02:15.022 ******* 2026-02-16 05:07:54.413347 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:07:54.413414 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:07:54.413433 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:07:54.413453 | orchestrator | 2026-02-16 05:07:54.413499 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-16 05:08:43.373300 | orchestrator | Monday 16 February 2026 05:07:54 +0000 (0:00:01.965) 0:02:16.988 ******* 2026-02-16 05:08:43.373461 | orchestrator | ok: [testbed-node-0] 2026-02-16 
05:08:43.373489 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:08:43.373507 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:08:43.373525 | orchestrator | 2026-02-16 05:08:43.373545 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-16 05:08:43.373564 | orchestrator | Monday 16 February 2026 05:07:56 +0000 (0:00:01.774) 0:02:18.762 ******* 2026-02-16 05:08:43.373582 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:08:43.373600 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:08:43.373616 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:08:43.373633 | orchestrator | 2026-02-16 05:08:43.373652 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-16 05:08:43.373671 | orchestrator | Monday 16 February 2026 05:07:57 +0000 (0:00:01.318) 0:02:20.080 ******* 2026-02-16 05:08:43.373691 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:08:43.373712 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:08:43.373731 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:08:43.373749 | orchestrator | 2026-02-16 05:08:43.373760 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-16 05:08:43.373792 | orchestrator | Monday 16 February 2026 05:07:59 +0000 (0:00:01.762) 0:02:21.842 ******* 2026-02-16 05:08:43.373812 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:08:43.373823 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:08:43.373834 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:08:43.373844 | orchestrator | 2026-02-16 05:08:43.373855 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-16 05:08:43.373866 | orchestrator | Monday 16 February 2026 05:08:01 +0000 (0:00:02.071) 0:02:23.914 ******* 2026-02-16 05:08:43.373876 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:08:43.373887 | orchestrator | changed: 
[testbed-node-1] 2026-02-16 05:08:43.373898 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:08:43.373908 | orchestrator | 2026-02-16 05:08:43.373919 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-16 05:08:43.373930 | orchestrator | Monday 16 February 2026 05:08:03 +0000 (0:00:01.868) 0:02:25.782 ******* 2026-02-16 05:08:43.373946 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:08:43.373964 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:08:43.373982 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:08:43.373999 | orchestrator | 2026-02-16 05:08:43.374095 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-16 05:08:43.374118 | orchestrator | Monday 16 February 2026 05:08:05 +0000 (0:00:01.986) 0:02:27.768 ******* 2026-02-16 05:08:43.374135 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:08:43.374155 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:08:43.374173 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:08:43.374191 | orchestrator | 2026-02-16 05:08:43.374211 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-16 05:08:43.374229 | orchestrator | Monday 16 February 2026 05:08:06 +0000 (0:00:01.385) 0:02:29.153 ******* 2026-02-16 05:08:43.374247 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:08:43.374262 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:08:43.374274 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:08:43.374285 | orchestrator | 2026-02-16 05:08:43.374296 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-16 05:08:43.374306 | orchestrator | Monday 16 February 2026 05:08:07 +0000 (0:00:01.378) 0:02:30.533 ******* 2026-02-16 05:08:43.374317 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:08:43.374328 | orchestrator | ok: [testbed-node-0] 
2026-02-16 05:08:43.374339 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:08:43.374349 | orchestrator | 2026-02-16 05:08:43.374360 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-16 05:08:43.374371 | orchestrator | Monday 16 February 2026 05:08:09 +0000 (0:00:01.669) 0:02:32.202 ******* 2026-02-16 05:08:43.374443 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:08:43.374456 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:08:43.374467 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:08:43.374477 | orchestrator | 2026-02-16 05:08:43.374489 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-16 05:08:43.374501 | orchestrator | Monday 16 February 2026 05:08:11 +0000 (0:00:01.741) 0:02:33.943 ******* 2026-02-16 05:08:43.374512 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-16 05:08:43.374523 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-16 05:08:43.374534 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-16 05:08:43.374545 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-16 05:08:43.374555 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-16 05:08:43.374566 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-16 05:08:43.374590 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-16 05:08:43.374601 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-16 05:08:43.374611 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-16 05:08:43.374622 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-16 05:08:43.374633 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-16 05:08:43.374643 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-16 05:08:43.374676 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-16 05:08:43.374688 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-16 05:08:43.374698 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-16 05:08:43.374709 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-16 05:08:43.374720 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-16 05:08:43.374730 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-16 05:08:43.374741 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-16 05:08:43.374752 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-16 05:08:43.374762 | orchestrator | 2026-02-16 05:08:43.374773 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-16 05:08:43.374783 | orchestrator | 2026-02-16 05:08:43.374795 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-16 05:08:43.374805 | orchestrator | Monday 16 February 2026 05:08:16 +0000 (0:00:04.664) 0:02:38.608 ******* 
2026-02-16 05:08:43.374816 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:08:43.374835 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:08:43.374853 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:08:43.374870 | orchestrator | 2026-02-16 05:08:43.374889 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-16 05:08:43.374906 | orchestrator | Monday 16 February 2026 05:08:17 +0000 (0:00:01.375) 0:02:39.984 ******* 2026-02-16 05:08:43.374923 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:08:43.374941 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:08:43.374958 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:08:43.374977 | orchestrator | 2026-02-16 05:08:43.374996 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-16 05:08:43.375017 | orchestrator | Monday 16 February 2026 05:08:19 +0000 (0:00:01.807) 0:02:41.791 ******* 2026-02-16 05:08:43.375036 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:08:43.375053 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:08:43.375072 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:08:43.375084 | orchestrator | 2026-02-16 05:08:43.375095 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-16 05:08:43.375106 | orchestrator | Monday 16 February 2026 05:08:20 +0000 (0:00:01.723) 0:02:43.515 ******* 2026-02-16 05:08:43.375117 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 05:08:43.375127 | orchestrator | 2026-02-16 05:08:43.375138 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-16 05:08:43.375149 | orchestrator | Monday 16 February 2026 05:08:22 +0000 (0:00:01.694) 0:02:45.209 ******* 2026-02-16 05:08:43.375159 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:08:43.375170 | orchestrator | 
skipping: [testbed-node-4] 2026-02-16 05:08:43.375181 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:08:43.375201 | orchestrator | 2026-02-16 05:08:43.375212 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-16 05:08:43.375223 | orchestrator | Monday 16 February 2026 05:08:23 +0000 (0:00:01.364) 0:02:46.574 ******* 2026-02-16 05:08:43.375233 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:08:43.375244 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:08:43.375254 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:08:43.375265 | orchestrator | 2026-02-16 05:08:43.375276 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-16 05:08:43.375286 | orchestrator | Monday 16 February 2026 05:08:25 +0000 (0:00:01.337) 0:02:47.911 ******* 2026-02-16 05:08:43.375297 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:08:43.375308 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:08:43.375318 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:08:43.375329 | orchestrator | 2026-02-16 05:08:43.375340 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-16 05:08:43.375350 | orchestrator | Monday 16 February 2026 05:08:26 +0000 (0:00:01.385) 0:02:49.296 ******* 2026-02-16 05:08:43.375361 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:08:43.375371 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:08:43.375445 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:08:43.375466 | orchestrator | 2026-02-16 05:08:43.375482 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-16 05:08:43.375513 | orchestrator | Monday 16 February 2026 05:08:28 +0000 (0:00:01.690) 0:02:50.986 ******* 2026-02-16 05:08:43.375532 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:08:43.375551 | orchestrator | ok: [testbed-node-4] 
2026-02-16 05:08:43.375569 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:08:43.375587 | orchestrator | 2026-02-16 05:08:43.375605 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-16 05:08:43.375624 | orchestrator | Monday 16 February 2026 05:08:30 +0000 (0:00:02.405) 0:02:53.392 ******* 2026-02-16 05:08:43.375643 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:08:43.375661 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:08:43.375679 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:08:43.375691 | orchestrator | 2026-02-16 05:08:43.375700 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-16 05:08:43.375710 | orchestrator | Monday 16 February 2026 05:08:33 +0000 (0:00:02.390) 0:02:55.783 ******* 2026-02-16 05:08:43.375719 | orchestrator | changed: [testbed-node-3] 2026-02-16 05:08:43.375729 | orchestrator | changed: [testbed-node-5] 2026-02-16 05:08:43.375738 | orchestrator | changed: [testbed-node-4] 2026-02-16 05:08:43.375748 | orchestrator | 2026-02-16 05:08:43.375757 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-16 05:08:43.375767 | orchestrator | 2026-02-16 05:08:43.375776 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-16 05:08:43.375786 | orchestrator | Monday 16 February 2026 05:08:41 +0000 (0:00:08.031) 0:03:03.814 ******* 2026-02-16 05:08:43.375796 | orchestrator | ok: [testbed-manager] 2026-02-16 05:08:43.375809 | orchestrator | 2026-02-16 05:08:43.375824 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-16 05:08:43.375854 | orchestrator | Monday 16 February 2026 05:08:43 +0000 (0:00:02.139) 0:03:05.953 ******* 2026-02-16 05:09:52.971610 | orchestrator | ok: [testbed-manager] 2026-02-16 05:09:52.971730 | orchestrator | 2026-02-16 05:09:52.971747 | 
orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-16 05:09:52.971761 | orchestrator | Monday 16 February 2026 05:08:44 +0000 (0:00:01.471) 0:03:07.425 ******* 2026-02-16 05:09:52.971773 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-16 05:09:52.971784 | orchestrator | 2026-02-16 05:09:52.971796 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-16 05:09:52.971807 | orchestrator | Monday 16 February 2026 05:08:46 +0000 (0:00:01.591) 0:03:09.017 ******* 2026-02-16 05:09:52.971818 | orchestrator | changed: [testbed-manager] 2026-02-16 05:09:52.971853 | orchestrator | 2026-02-16 05:09:52.971865 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-16 05:09:52.971876 | orchestrator | Monday 16 February 2026 05:08:48 +0000 (0:00:01.998) 0:03:11.015 ******* 2026-02-16 05:09:52.971887 | orchestrator | changed: [testbed-manager] 2026-02-16 05:09:52.971898 | orchestrator | 2026-02-16 05:09:52.971908 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-16 05:09:52.971935 | orchestrator | Monday 16 February 2026 05:08:50 +0000 (0:00:01.601) 0:03:12.617 ******* 2026-02-16 05:09:52.971946 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-16 05:09:52.971957 | orchestrator | 2026-02-16 05:09:52.971968 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-16 05:09:52.971979 | orchestrator | Monday 16 February 2026 05:08:52 +0000 (0:00:02.969) 0:03:15.586 ******* 2026-02-16 05:09:52.971989 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-16 05:09:52.972000 | orchestrator | 2026-02-16 05:09:52.972010 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-02-16 05:09:52.972021 | orchestrator | Monday 16 February 
2026 05:08:54 +0000 (0:00:01.809) 0:03:17.396 ******* 2026-02-16 05:09:52.972032 | orchestrator | ok: [testbed-manager] 2026-02-16 05:09:52.972043 | orchestrator | 2026-02-16 05:09:52.972054 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-16 05:09:52.972065 | orchestrator | Monday 16 February 2026 05:08:56 +0000 (0:00:01.477) 0:03:18.873 ******* 2026-02-16 05:09:52.972075 | orchestrator | ok: [testbed-manager] 2026-02-16 05:09:52.972086 | orchestrator | 2026-02-16 05:09:52.972097 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-16 05:09:52.972108 | orchestrator | 2026-02-16 05:09:52.972118 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-16 05:09:52.972129 | orchestrator | Monday 16 February 2026 05:08:57 +0000 (0:00:01.537) 0:03:20.411 ******* 2026-02-16 05:09:52.972142 | orchestrator | ok: [testbed-manager] 2026-02-16 05:09:52.972155 | orchestrator | 2026-02-16 05:09:52.972168 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-16 05:09:52.972180 | orchestrator | Monday 16 February 2026 05:08:59 +0000 (0:00:01.200) 0:03:21.612 ******* 2026-02-16 05:09:52.972192 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-16 05:09:52.972206 | orchestrator | 2026-02-16 05:09:52.972219 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-16 05:09:52.972231 | orchestrator | Monday 16 February 2026 05:09:00 +0000 (0:00:01.435) 0:03:23.047 ******* 2026-02-16 05:09:52.972243 | orchestrator | ok: [testbed-manager] 2026-02-16 05:09:52.972256 | orchestrator | 2026-02-16 05:09:52.972268 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-02-16 05:09:52.972281 | orchestrator | Monday 16 February 2026 
05:09:02 +0000 (0:00:01.847) 0:03:24.894 ******* 2026-02-16 05:09:52.972293 | orchestrator | ok: [testbed-manager] 2026-02-16 05:09:52.972306 | orchestrator | 2026-02-16 05:09:52.972319 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-16 05:09:52.972332 | orchestrator | Monday 16 February 2026 05:09:04 +0000 (0:00:02.637) 0:03:27.532 ******* 2026-02-16 05:09:52.972344 | orchestrator | ok: [testbed-manager] 2026-02-16 05:09:52.972357 | orchestrator | 2026-02-16 05:09:52.972370 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-16 05:09:52.972382 | orchestrator | Monday 16 February 2026 05:09:06 +0000 (0:00:01.442) 0:03:28.975 ******* 2026-02-16 05:09:52.972396 | orchestrator | ok: [testbed-manager] 2026-02-16 05:09:52.972409 | orchestrator | 2026-02-16 05:09:52.972482 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-16 05:09:52.972496 | orchestrator | Monday 16 February 2026 05:09:07 +0000 (0:00:01.467) 0:03:30.442 ******* 2026-02-16 05:09:52.972507 | orchestrator | ok: [testbed-manager] 2026-02-16 05:09:52.972517 | orchestrator | 2026-02-16 05:09:52.972528 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-16 05:09:52.972548 | orchestrator | Monday 16 February 2026 05:09:09 +0000 (0:00:01.662) 0:03:32.105 ******* 2026-02-16 05:09:52.972558 | orchestrator | ok: [testbed-manager] 2026-02-16 05:09:52.972569 | orchestrator | 2026-02-16 05:09:52.972580 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-16 05:09:52.972591 | orchestrator | Monday 16 February 2026 05:09:12 +0000 (0:00:02.545) 0:03:34.650 ******* 2026-02-16 05:09:52.972601 | orchestrator | ok: [testbed-manager] 2026-02-16 05:09:52.972612 | orchestrator | 2026-02-16 05:09:52.972623 | orchestrator | PLAY [Run post actions on master 
nodes] **************************************** 2026-02-16 05:09:52.972633 | orchestrator | 2026-02-16 05:09:52.972644 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-16 05:09:52.972654 | orchestrator | Monday 16 February 2026 05:09:13 +0000 (0:00:01.714) 0:03:36.364 ******* 2026-02-16 05:09:52.972665 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:09:52.972676 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:09:52.972687 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:09:52.972697 | orchestrator | 2026-02-16 05:09:52.972708 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-16 05:09:52.972719 | orchestrator | Monday 16 February 2026 05:09:15 +0000 (0:00:01.367) 0:03:37.732 ******* 2026-02-16 05:09:52.972729 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:09:52.972740 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:09:52.972751 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:09:52.972761 | orchestrator | 2026-02-16 05:09:52.972790 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-16 05:09:52.972801 | orchestrator | Monday 16 February 2026 05:09:16 +0000 (0:00:01.604) 0:03:39.337 ******* 2026-02-16 05:09:52.972812 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:09:52.972824 | orchestrator | 2026-02-16 05:09:52.972835 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-16 05:09:52.972846 | orchestrator | Monday 16 February 2026 05:09:18 +0000 (0:00:01.732) 0:03:41.070 ******* 2026-02-16 05:09:52.972856 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-16 05:09:52.972868 | orchestrator | 2026-02-16 05:09:52.972879 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] 
********************* 2026-02-16 05:09:52.972889 | orchestrator | Monday 16 February 2026 05:09:20 +0000 (0:00:01.897) 0:03:42.968 ******* 2026-02-16 05:09:52.972900 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 05:09:52.972911 | orchestrator | 2026-02-16 05:09:52.972922 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-16 05:09:52.972932 | orchestrator | Monday 16 February 2026 05:09:22 +0000 (0:00:01.919) 0:03:44.888 ******* 2026-02-16 05:09:52.972943 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:09:52.972954 | orchestrator | 2026-02-16 05:09:52.972965 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-16 05:09:52.972976 | orchestrator | Monday 16 February 2026 05:09:23 +0000 (0:00:01.131) 0:03:46.020 ******* 2026-02-16 05:09:52.972986 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 05:09:52.972997 | orchestrator | 2026-02-16 05:09:52.973008 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-16 05:09:52.973019 | orchestrator | Monday 16 February 2026 05:09:25 +0000 (0:00:02.005) 0:03:48.025 ******* 2026-02-16 05:09:52.973029 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 05:09:52.973040 | orchestrator | 2026-02-16 05:09:52.973051 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-16 05:09:52.973062 | orchestrator | Monday 16 February 2026 05:09:27 +0000 (0:00:02.121) 0:03:50.147 ******* 2026-02-16 05:09:52.973072 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-16 05:09:52.973083 | orchestrator | 2026-02-16 05:09:52.973210 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-16 05:09:52.973225 | orchestrator | Monday 16 February 2026 05:09:28 +0000 (0:00:01.206) 0:03:51.354 ******* 2026-02-16 05:09:52.973249 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-02-16 05:09:52.973260 | orchestrator | 2026-02-16 05:09:52.973271 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-16 05:09:52.973281 | orchestrator | Monday 16 February 2026 05:09:29 +0000 (0:00:01.135) 0:03:52.490 ******* 2026-02-16 05:09:52.973292 | orchestrator | ok: [testbed-node-0 -> localhost] => { 2026-02-16 05:09:52.973303 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n" 2026-02-16 05:09:52.973315 | orchestrator | } 2026-02-16 05:09:52.973326 | orchestrator | 2026-02-16 05:09:52.973337 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-16 05:09:52.973347 | orchestrator | Monday 16 February 2026 05:09:31 +0000 (0:00:01.211) 0:03:53.701 ******* 2026-02-16 05:09:52.973358 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:09:52.973369 | orchestrator | 2026-02-16 05:09:52.973379 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-16 05:09:52.973390 | orchestrator | Monday 16 February 2026 05:09:32 +0000 (0:00:01.130) 0:03:54.832 ******* 2026-02-16 05:09:52.973401 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-16 05:09:52.973412 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-16 05:09:52.973440 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-16 05:09:52.973452 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-16 05:09:52.973462 | orchestrator | 2026-02-16 05:09:52.973473 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-16 05:09:52.973484 | orchestrator | Monday 16 February 2026 05:09:37 +0000 (0:00:05.714) 0:04:00.546 ******* 2026-02-16 05:09:52.973495 | orchestrator 
| ok: [testbed-node-0 -> localhost] 2026-02-16 05:09:52.973505 | orchestrator | 2026-02-16 05:09:52.973561 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-16 05:09:52.973572 | orchestrator | Monday 16 February 2026 05:09:40 +0000 (0:00:02.431) 0:04:02.978 ******* 2026-02-16 05:09:52.973583 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-16 05:09:52.973594 | orchestrator | 2026-02-16 05:09:52.973605 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-16 05:09:52.973616 | orchestrator | Monday 16 February 2026 05:09:43 +0000 (0:00:02.630) 0:04:05.609 ******* 2026-02-16 05:09:52.973627 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-16 05:09:52.973638 | orchestrator | 2026-02-16 05:09:52.973649 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-16 05:09:52.973670 | orchestrator | Monday 16 February 2026 05:09:47 +0000 (0:00:04.416) 0:04:10.025 ******* 2026-02-16 05:09:52.973681 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:09:52.973692 | orchestrator | 2026-02-16 05:09:52.973703 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-16 05:09:52.973714 | orchestrator | Monday 16 February 2026 05:09:48 +0000 (0:00:01.169) 0:04:11.194 ******* 2026-02-16 05:09:52.973724 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-16 05:09:52.973736 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-16 05:09:52.973746 | orchestrator | 2026-02-16 05:09:52.973757 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-16 05:09:52.973768 | orchestrator | Monday 16 February 2026 05:09:51 +0000 (0:00:02.853) 0:04:14.048 ******* 2026-02-16 
05:09:52.973779 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:09:52.973800 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:10:20.231192 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:10:20.231305 | orchestrator | 2026-02-16 05:10:20.231322 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-16 05:10:20.231335 | orchestrator | Monday 16 February 2026 05:09:52 +0000 (0:00:01.505) 0:04:15.554 ******* 2026-02-16 05:10:20.231378 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:10:20.231390 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:10:20.231401 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:10:20.231426 | orchestrator | 2026-02-16 05:10:20.231468 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-16 05:10:20.231480 | orchestrator | 2026-02-16 05:10:20.231491 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-16 05:10:20.231502 | orchestrator | Monday 16 February 2026 05:09:55 +0000 (0:00:02.088) 0:04:17.643 ******* 2026-02-16 05:10:20.231513 | orchestrator | ok: [testbed-manager] 2026-02-16 05:10:20.231527 | orchestrator | 2026-02-16 05:10:20.231545 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-02-16 05:10:20.231563 | orchestrator | Monday 16 February 2026 05:09:56 +0000 (0:00:01.220) 0:04:18.863 ******* 2026-02-16 05:10:20.231599 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-16 05:10:20.231692 | orchestrator | 2026-02-16 05:10:20.231705 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-16 05:10:20.231716 | orchestrator | Monday 16 February 2026 05:09:57 +0000 (0:00:01.513) 0:04:20.377 ******* 2026-02-16 05:10:20.231727 | orchestrator | ok: [testbed-manager] 2026-02-16 05:10:20.231740 | 
orchestrator | 2026-02-16 05:10:20.231752 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-16 05:10:20.231765 | orchestrator | 2026-02-16 05:10:20.231778 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-16 05:10:20.231790 | orchestrator | Monday 16 February 2026 05:10:02 +0000 (0:00:05.023) 0:04:25.401 ******* 2026-02-16 05:10:20.231803 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:10:20.231815 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:10:20.231828 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:10:20.231840 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:10:20.231852 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:10:20.231864 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:10:20.231876 | orchestrator | 2026-02-16 05:10:20.231893 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-16 05:10:20.231911 | orchestrator | Monday 16 February 2026 05:10:04 +0000 (0:00:01.890) 0:04:27.292 ******* 2026-02-16 05:10:20.231924 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-16 05:10:20.231936 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-16 05:10:20.231948 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-16 05:10:20.231960 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-16 05:10:20.231973 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-16 05:10:20.231985 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-16 05:10:20.231996 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 
2026-02-16 05:10:20.232008 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-16 05:10:20.232020 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-16 05:10:20.232032 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-16 05:10:20.232044 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-16 05:10:20.232056 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-16 05:10:20.232068 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-16 05:10:20.232080 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-16 05:10:20.232092 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-16 05:10:20.232127 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-16 05:10:20.232144 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-16 05:10:20.232160 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-16 05:10:20.232171 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-16 05:10:20.232181 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-16 05:10:20.232192 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-16 05:10:20.232202 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-16 05:10:20.232213 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-16 
05:10:20.232223 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-16 05:10:20.232233 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-16 05:10:20.232244 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-16 05:10:20.232275 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-16 05:10:20.232365 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-16 05:10:20.232381 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-16 05:10:20.232392 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-16 05:10:20.232410 | orchestrator | 2026-02-16 05:10:20.232424 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-16 05:10:20.232456 | orchestrator | Monday 16 February 2026 05:10:15 +0000 (0:00:10.905) 0:04:38.197 ******* 2026-02-16 05:10:20.232467 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:10:20.232479 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:10:20.232490 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:10:20.232500 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:10:20.232511 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:10:20.232553 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:10:20.232565 | orchestrator | 2026-02-16 05:10:20.232575 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-16 05:10:20.232594 | orchestrator | Monday 16 February 2026 05:10:17 +0000 (0:00:01.999) 0:04:40.197 ******* 2026-02-16 05:10:20.232605 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:10:20.232620 | orchestrator | skipping: [testbed-node-4] 
2026-02-16 05:10:20.232636 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:10:20.232646 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:10:20.232657 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:10:20.232667 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:10:20.232678 | orchestrator | 2026-02-16 05:10:20.232688 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 05:10:20.232699 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 05:10:20.232712 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-16 05:10:20.232723 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-16 05:10:20.232734 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-16 05:10:20.232744 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-16 05:10:20.232764 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-16 05:10:20.232775 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-16 05:10:20.232785 | orchestrator | 2026-02-16 05:10:20.232796 | orchestrator | 2026-02-16 05:10:20.232842 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 05:10:20.232854 | orchestrator | Monday 16 February 2026 05:10:20 +0000 (0:00:02.590) 0:04:42.787 ******* 2026-02-16 05:10:20.232865 | orchestrator | =============================================================================== 2026-02-16 05:10:20.232875 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.51s 2026-02-16 05:10:20.232886 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.40s 2026-02-16 05:10:20.232898 | orchestrator | Manage labels ---------------------------------------------------------- 10.91s 2026-02-16 05:10:20.232908 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.03s 2026-02-16 05:10:20.232919 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.71s 2026-02-16 05:10:20.232929 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.02s 2026-02-16 05:10:20.232940 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.66s 2026-02-16 05:10:20.232951 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.42s 2026-02-16 05:10:20.232961 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.00s 2026-02-16 05:10:20.232971 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 3.01s 2026-02-16 05:10:20.232982 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.97s 2026-02-16 05:10:20.232992 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.88s 2026-02-16 05:10:20.233009 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.85s 2026-02-16 05:10:20.233053 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.82s 2026-02-16 05:10:20.233065 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.70s 2026-02-16 05:10:20.233075 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.64s 2026-02-16 05:10:20.233085 | orchestrator | k3s_server_post : Copy 
BGP manifests to first master -------------------- 2.63s 2026-02-16 05:10:20.233096 | orchestrator | Manage taints ----------------------------------------------------------- 2.59s 2026-02-16 05:10:20.233118 | orchestrator | kubectl : Install required packages ------------------------------------- 2.55s 2026-02-16 05:10:20.751287 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.50s 2026-02-16 05:10:21.087835 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-16 05:10:21.087908 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-02-16 05:10:21.096306 | orchestrator | + set -e 2026-02-16 05:10:21.096394 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-16 05:10:21.096407 | orchestrator | ++ export INTERACTIVE=false 2026-02-16 05:10:21.096416 | orchestrator | ++ INTERACTIVE=false 2026-02-16 05:10:21.096422 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-16 05:10:21.096428 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-16 05:10:21.096467 | orchestrator | + osism apply openstackclient 2026-02-16 05:10:33.150378 | orchestrator | 2026-02-16 05:10:33 | INFO  | Task 41d0e123-075a-4c84-a284-33a49bb764e8 (openstackclient) was prepared for execution. 2026-02-16 05:10:33.150543 | orchestrator | 2026-02-16 05:10:33 | INFO  | It takes a moment until task 41d0e123-075a-4c84-a284-33a49bb764e8 (openstackclient) has been started and output is visible here. 
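The PLAY RECAP blocks in this console log (e.g. `testbed-manager : ok=21  changed=4  unreachable=0 failed=0 …` above) can be checked mechanically when post-processing the job output. A minimal sketch in Python — the regex and the helper name `failed_hosts` are illustrative assumptions for log scraping, not part of the job itself:

```python
import re

# Matches an Ansible PLAY RECAP host line, e.g.
#   testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25 ...
RECAP_RE = re.compile(
    r"^(?P<host>[\w.-]+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)"
    r"\s+unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def failed_hosts(recap_lines):
    """Return hosts whose recap reports failed tasks or unreachable nodes."""
    bad = []
    for line in recap_lines:
        m = RECAP_RE.match(line.strip())
        if m and (int(m.group("failed")) or int(m.group("unreachable"))):
            bad.append(m.group("host"))
    return bad

recap = [
    "testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
    "testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0",
]
print(failed_hosts(recap))  # -> []
```

A wrapper like this is how a periodic pipeline such as this one could flag a recap with `failed>0` without re-parsing the full task stream.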
2026-02-16 05:11:08.870631 | orchestrator | 2026-02-16 05:11:08.870742 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-16 05:11:08.870758 | orchestrator | 2026-02-16 05:11:08.870792 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-16 05:11:08.870805 | orchestrator | Monday 16 February 2026 05:10:39 +0000 (0:00:02.061) 0:00:02.061 ******* 2026-02-16 05:11:08.870817 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-16 05:11:08.870830 | orchestrator | 2026-02-16 05:11:08.870841 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-16 05:11:08.870852 | orchestrator | Monday 16 February 2026 05:10:41 +0000 (0:00:01.895) 0:00:03.957 ******* 2026-02-16 05:11:08.870863 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-16 05:11:08.870875 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-16 05:11:08.870889 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-16 05:11:08.870908 | orchestrator | 2026-02-16 05:11:08.870926 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-16 05:11:08.870944 | orchestrator | Monday 16 February 2026 05:10:43 +0000 (0:00:02.338) 0:00:06.295 ******* 2026-02-16 05:11:08.870962 | orchestrator | changed: [testbed-manager] 2026-02-16 05:11:08.870980 | orchestrator | 2026-02-16 05:11:08.870999 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-16 05:11:08.871018 | orchestrator | Monday 16 February 2026 05:10:46 +0000 (0:00:02.309) 0:00:08.604 ******* 2026-02-16 05:11:08.871036 | orchestrator | ok: [testbed-manager] 2026-02-16 05:11:08.871056 | 
orchestrator | 2026-02-16 05:11:08.871072 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-16 05:11:08.871083 | orchestrator | Monday 16 February 2026 05:10:48 +0000 (0:00:02.076) 0:00:10.681 ******* 2026-02-16 05:11:08.871094 | orchestrator | ok: [testbed-manager] 2026-02-16 05:11:08.871106 | orchestrator | 2026-02-16 05:11:08.871120 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-16 05:11:08.871134 | orchestrator | Monday 16 February 2026 05:10:50 +0000 (0:00:01.906) 0:00:12.588 ******* 2026-02-16 05:11:08.871146 | orchestrator | ok: [testbed-manager] 2026-02-16 05:11:08.871158 | orchestrator | 2026-02-16 05:11:08.871171 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-16 05:11:08.871184 | orchestrator | Monday 16 February 2026 05:10:51 +0000 (0:00:01.533) 0:00:14.121 ******* 2026-02-16 05:11:08.871197 | orchestrator | changed: [testbed-manager] 2026-02-16 05:11:08.871209 | orchestrator | 2026-02-16 05:11:08.871222 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-16 05:11:08.871234 | orchestrator | Monday 16 February 2026 05:11:03 +0000 (0:00:11.243) 0:00:25.365 ******* 2026-02-16 05:11:08.871247 | orchestrator | changed: [testbed-manager] 2026-02-16 05:11:08.871259 | orchestrator | 2026-02-16 05:11:08.871271 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-16 05:11:08.871284 | orchestrator | Monday 16 February 2026 05:11:05 +0000 (0:00:02.008) 0:00:27.373 ******* 2026-02-16 05:11:08.871296 | orchestrator | changed: [testbed-manager] 2026-02-16 05:11:08.871309 | orchestrator | 2026-02-16 05:11:08.871321 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-16 05:11:08.871334 | orchestrator | Monday 16 February 
2026 05:11:06 +0000 (0:00:01.547) 0:00:28.920 ******* 2026-02-16 05:11:08.871346 | orchestrator | ok: [testbed-manager] 2026-02-16 05:11:08.871358 | orchestrator | 2026-02-16 05:11:08.871370 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 05:11:08.871383 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-16 05:11:08.871397 | orchestrator | 2026-02-16 05:11:08.871440 | orchestrator | 2026-02-16 05:11:08.871514 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 05:11:08.871529 | orchestrator | Monday 16 February 2026 05:11:08 +0000 (0:00:01.937) 0:00:30.858 ******* 2026-02-16 05:11:08.871542 | orchestrator | =============================================================================== 2026-02-16 05:11:08.871553 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 11.24s 2026-02-16 05:11:08.871564 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.34s 2026-02-16 05:11:08.871575 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.31s 2026-02-16 05:11:08.871586 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 2.08s 2026-02-16 05:11:08.871597 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.01s 2026-02-16 05:11:08.871607 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.94s 2026-02-16 05:11:08.871618 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.91s 2026-02-16 05:11:08.871629 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.90s 2026-02-16 05:11:08.871640 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.55s 
2026-02-16 05:11:08.871651 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.53s 2026-02-16 05:11:09.180235 | orchestrator | + osism apply -a upgrade common 2026-02-16 05:11:11.256724 | orchestrator | 2026-02-16 05:11:11 | INFO  | Task 2fd35f99-2d6d-4515-ab01-76c767dc26c4 (common) was prepared for execution. 2026-02-16 05:11:11.256815 | orchestrator | 2026-02-16 05:11:11 | INFO  | It takes a moment until task 2fd35f99-2d6d-4515-ab01-76c767dc26c4 (common) has been started and output is visible here. 2026-02-16 05:11:26.832171 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-16 05:11:26.832297 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-16 05:11:26.832325 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-16 05:11:26.832335 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-16 05:11:26.832355 | orchestrator | 2026-02-16 05:11:26.832365 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-16 05:11:26.832375 | orchestrator | 2026-02-16 05:11:26.832385 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-16 05:11:26.832395 | orchestrator | Monday 16 February 2026 05:11:17 +0000 (0:00:01.662) 0:00:01.662 ******* 2026-02-16 05:11:26.832405 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 05:11:26.832417 | orchestrator | 2026-02-16 05:11:26.832427 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-16 05:11:26.832436 | orchestrator | Monday 16 February 2026 05:11:19 +0000 (0:00:02.179) 0:00:03.842 ******* 2026-02-16 05:11:26.832446 | orchestrator | ok: [testbed-node-0] => 
(item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 05:11:26.832455 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 05:11:26.832538 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 05:11:26.832549 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 05:11:26.832558 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 05:11:26.832567 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 05:11:26.832577 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 05:11:26.832609 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 05:11:26.832619 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 05:11:26.832628 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 05:11:26.832637 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 05:11:26.832647 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 05:11:26.832656 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 05:11:26.832665 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 05:11:26.832676 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 05:11:26.832687 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 05:11:26.832697 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 05:11:26.832708 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 
'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 05:11:26.832719 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 05:11:26.832730 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 05:11:26.832740 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 05:11:26.832751 | orchestrator | 2026-02-16 05:11:26.832762 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-16 05:11:26.832773 | orchestrator | Monday 16 February 2026 05:11:22 +0000 (0:00:02.658) 0:00:06.500 ******* 2026-02-16 05:11:26.832783 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 05:11:26.832796 | orchestrator | 2026-02-16 05:11:26.832807 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-16 05:11:26.832818 | orchestrator | Monday 16 February 2026 05:11:24 +0000 (0:00:02.123) 0:00:08.623 ******* 2026-02-16 05:11:26.832833 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:11:26.832878 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:11:26.832891 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:11:26.832903 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:11:26.832922 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:11:26.832934 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:11:26.833118 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:11:26.833145 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:26.833184 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:28.588906 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:28.589047 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:28.589075 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:28.589096 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:28.589133 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:28.589164 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:28.589186 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:28.589225 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:28.589237 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:28.589257 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:28.589267 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:28.589276 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:28.589287 | orchestrator | 2026-02-16 05:11:28.589299 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-16 05:11:28.589309 | orchestrator | Monday 16 February 2026 05:11:27 +0000 (0:00:03.464) 0:00:12.088 ******* 2026-02-16 05:11:28.589327 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:11:28.589339 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:11:28.589350 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:28.589372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:29.493101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:11:29.493178 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:29.493199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:29.493206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:29.493213 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:11:29.493250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:29.493256 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:11:29.493262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:11:29.493268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:29.493289 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:11:29.493306 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:11:29.493312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:29.493317 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:11:29.493322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:29.493328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:11:29.493337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:29.493342 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:11:29.493347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:29.493352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-16 05:11:29.493362 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:11:29.493367 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:11:29.493377 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:31.687815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:31.687916 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:11:31.687933 | orchestrator | 2026-02-16 05:11:31.687947 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal 
TLS key] ****** 2026-02-16 05:11:31.687960 | orchestrator | Monday 16 February 2026 05:11:29 +0000 (0:00:01.672) 0:00:13.761 ******* 2026-02-16 05:11:31.687973 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:11:31.688003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:11:31.688016 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:31.688028 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:11:31.688063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:31.688095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:31.688107 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:31.688119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:31.688130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:11:31.688142 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:11:31.688153 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:11:31.688164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:11:31.688176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:31.688195 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:11:31.688206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:31.688224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:31.688245 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:11:39.663661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:39.663749 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:11:39.663757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:39.663763 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:11:39.663780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:11:39.663787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:39.663808 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:39.663813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-16 05:11:39.663818 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:11:39.663822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:39.663827 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:11:39.663831 | orchestrator | 2026-02-16 05:11:39.663837 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-02-16 05:11:39.663843 | orchestrator | Monday 16 February 2026 05:11:31 +0000 (0:00:02.202) 0:00:15.964 ******* 2026-02-16 05:11:39.663847 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:11:39.663867 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:11:39.663872 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:11:39.663876 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:11:39.663880 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:11:39.663885 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:11:39.663899 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:11:39.663904 | orchestrator | 2026-02-16 05:11:39.663909 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-16 05:11:39.663913 | orchestrator | Monday 16 February 2026 05:11:32 +0000 (0:00:00.983) 0:00:16.948 ******* 2026-02-16 05:11:39.663917 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:11:39.663921 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:11:39.663926 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:11:39.663930 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:11:39.663934 | 
orchestrator | skipping: [testbed-node-3] 2026-02-16 05:11:39.663938 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:11:39.663942 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:11:39.663947 | orchestrator | 2026-02-16 05:11:39.663951 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-16 05:11:39.663955 | orchestrator | Monday 16 February 2026 05:11:33 +0000 (0:00:00.922) 0:00:17.870 ******* 2026-02-16 05:11:39.663959 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:11:39.663964 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:11:39.663972 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:11:39.663977 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:11:39.663981 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:11:39.663993 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:11:39.663998 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:11:39.664003 | orchestrator | 2026-02-16 05:11:39.664007 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-02-16 05:11:39.664012 | orchestrator | Monday 16 February 2026 05:11:34 +0000 (0:00:00.783) 0:00:18.654 ******* 2026-02-16 05:11:39.664016 | orchestrator | changed: [testbed-manager] 2026-02-16 05:11:39.664021 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:11:39.664025 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:11:39.664030 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:11:39.664034 | orchestrator | changed: [testbed-node-3] 2026-02-16 05:11:39.664039 | orchestrator | changed: [testbed-node-4] 2026-02-16 05:11:39.664043 | orchestrator | changed: [testbed-node-5] 2026-02-16 05:11:39.664048 | orchestrator | 2026-02-16 05:11:39.664056 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-16 05:11:39.664060 | orchestrator | Monday 16 February 2026 05:11:36 +0000 
(0:00:01.894) 0:00:20.548 ******* 2026-02-16 05:11:39.664066 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:11:39.664071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:11:39.664076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:11:39.664081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:11:39.664091 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:11:40.532264 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:40.532368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:11:40.532400 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:11:40.532413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:40.532423 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-16 05:11:40.532433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:40.532444 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:40.532594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:40.532611 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:40.532621 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:40.532630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:40.532641 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:40.532662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:40.532673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:40.532683 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:40.532711 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:54.293682 | orchestrator 
| 2026-02-16 05:11:54.293811 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-16 05:11:54.293839 | orchestrator | Monday 16 February 2026 05:11:40 +0000 (0:00:04.262) 0:00:24.811 ******* 2026-02-16 05:11:54.293858 | orchestrator | [WARNING]: Skipped 2026-02-16 05:11:54.293878 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-16 05:11:54.294150 | orchestrator | to this access issue: 2026-02-16 05:11:54.294175 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-16 05:11:54.294195 | orchestrator | directory 2026-02-16 05:11:54.294215 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-16 05:11:54.294236 | orchestrator | 2026-02-16 05:11:54.294256 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-16 05:11:54.294274 | orchestrator | Monday 16 February 2026 05:11:41 +0000 (0:00:01.338) 0:00:26.149 ******* 2026-02-16 05:11:54.294293 | orchestrator | [WARNING]: Skipped 2026-02-16 05:11:54.294313 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-16 05:11:54.294332 | orchestrator | to this access issue: 2026-02-16 05:11:54.294351 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-16 05:11:54.294369 | orchestrator | directory 2026-02-16 05:11:54.294387 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-16 05:11:54.294405 | orchestrator | 2026-02-16 05:11:54.294443 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-16 05:11:54.294461 | orchestrator | Monday 16 February 2026 05:11:42 +0000 (0:00:00.931) 0:00:27.081 ******* 2026-02-16 05:11:54.294560 | orchestrator | [WARNING]: Skipped 2026-02-16 05:11:54.294581 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-16 05:11:54.294600 | orchestrator | to this access issue: 2026-02-16 05:11:54.294618 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-16 05:11:54.294638 | orchestrator | directory 2026-02-16 05:11:54.294659 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-16 05:11:54.294680 | orchestrator | 2026-02-16 05:11:54.294699 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-16 05:11:54.294718 | orchestrator | Monday 16 February 2026 05:11:43 +0000 (0:00:00.901) 0:00:27.982 ******* 2026-02-16 05:11:54.294737 | orchestrator | [WARNING]: Skipped 2026-02-16 05:11:54.294757 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-16 05:11:54.294776 | orchestrator | to this access issue: 2026-02-16 05:11:54.294795 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-16 05:11:54.294815 | orchestrator | directory 2026-02-16 05:11:54.294834 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-16 05:11:54.294853 | orchestrator | 2026-02-16 05:11:54.294872 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-16 05:11:54.294891 | orchestrator | Monday 16 February 2026 05:11:44 +0000 (0:00:00.912) 0:00:28.895 ******* 2026-02-16 05:11:54.294910 | orchestrator | changed: [testbed-manager] 2026-02-16 05:11:54.294961 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:11:54.294982 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:11:54.295001 | orchestrator | changed: [testbed-node-3] 2026-02-16 05:11:54.295017 | orchestrator | changed: [testbed-node-4] 2026-02-16 05:11:54.295035 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:11:54.295054 | orchestrator | changed: [testbed-node-5] 2026-02-16 05:11:54.295072 
| orchestrator | 2026-02-16 05:11:54.295090 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-16 05:11:54.295108 | orchestrator | Monday 16 February 2026 05:11:48 +0000 (0:00:03.455) 0:00:32.351 ******* 2026-02-16 05:11:54.295126 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 05:11:54.295148 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 05:11:54.295167 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 05:11:54.295185 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 05:11:54.295204 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 05:11:54.295224 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 05:11:54.295242 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 05:11:54.295262 | orchestrator | 2026-02-16 05:11:54.295282 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-16 05:11:54.295300 | orchestrator | Monday 16 February 2026 05:11:50 +0000 (0:00:02.394) 0:00:34.745 ******* 2026-02-16 05:11:54.295319 | orchestrator | ok: [testbed-manager] 2026-02-16 05:11:54.295338 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:11:54.295356 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:11:54.295375 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:11:54.295393 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:11:54.295410 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:11:54.295427 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:11:54.295445 | 
orchestrator | 2026-02-16 05:11:54.295463 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-16 05:11:54.295507 | orchestrator | Monday 16 February 2026 05:11:52 +0000 (0:00:01.997) 0:00:36.742 ******* 2026-02-16 05:11:54.295559 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:11:54.295585 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:54.295615 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:11:54.295650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:54.295673 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:54.295695 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:11:54.295714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:11:54.295734 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:11:54.295758 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:00.366652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:00.366814 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:00.366843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:00.366862 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:00.366873 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:00.366885 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:00.366898 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:00.366930 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:00.366942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:00.366961 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:00.366999 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:00.367010 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:00.367021 | orchestrator | 2026-02-16 05:12:00.367032 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-16 05:12:00.367044 | orchestrator | Monday 16 February 2026 05:11:54 +0000 (0:00:01.969) 0:00:38.712 ******* 2026-02-16 05:12:00.367054 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-16 05:12:00.367064 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-16 05:12:00.367074 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-16 05:12:00.367083 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-16 05:12:00.367093 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-16 05:12:00.367110 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-16 05:12:00.367127 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-16 05:12:00.367149 | orchestrator | 2026-02-16 05:12:00.367176 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-16 05:12:00.367194 | orchestrator | Monday 16 February 2026 05:11:56 +0000 (0:00:02.197) 0:00:40.909 ******* 2026-02-16 05:12:00.367212 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-16 05:12:00.367230 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-16 05:12:00.367249 | orchestrator | ok: [testbed-node-1] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-16 05:12:00.367269 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-16 05:12:00.367302 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-16 05:12:00.367317 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-16 05:12:00.367329 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-16 05:12:00.367349 | orchestrator | 2026-02-16 05:12:00.367361 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-16 05:12:00.367372 | orchestrator | Monday 16 February 2026 05:11:58 +0000 (0:00:02.381) 0:00:43.291 ******* 2026-02-16 05:12:00.367405 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:02.298386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:02.298559 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:02.298578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:02.298605 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:02.298619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:02.298642 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:02.298678 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:02.298716 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:02.298731 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:02.298744 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:02.298755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:02.298766 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:02.298782 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:02.298802 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:02.298835 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:04.367122 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:04.367197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:04.367204 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:04.367214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:04.367225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:04.367238 | orchestrator | 2026-02-16 05:12:04.367250 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-16 05:12:04.367260 | orchestrator | Monday 16 February 2026 05:12:02 +0000 (0:00:03.748) 0:00:47.039 ******* 2026-02-16 05:12:04.367293 | orchestrator | changed: [testbed-manager] => { 2026-02-16 05:12:04.367304 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:12:04.367313 | orchestrator | } 2026-02-16 05:12:04.367322 | orchestrator | changed: [testbed-node-0] => { 2026-02-16 05:12:04.367330 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:12:04.367339 | orchestrator | } 2026-02-16 05:12:04.367347 | orchestrator | changed: [testbed-node-1] => { 2026-02-16 05:12:04.367356 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:12:04.367365 | orchestrator | } 2026-02-16 05:12:04.367374 | orchestrator | changed: [testbed-node-2] => { 2026-02-16 05:12:04.367383 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:12:04.367391 | orchestrator | } 2026-02-16 05:12:04.367400 | orchestrator | changed: [testbed-node-3] => { 2026-02-16 05:12:04.367408 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:12:04.367417 | orchestrator | } 2026-02-16 05:12:04.367426 | 
orchestrator | changed: [testbed-node-4] => { 2026-02-16 05:12:04.367436 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:12:04.367442 | orchestrator | } 2026-02-16 05:12:04.367447 | orchestrator | changed: [testbed-node-5] => { 2026-02-16 05:12:04.367452 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:12:04.367457 | orchestrator | } 2026-02-16 05:12:04.367463 | orchestrator | 2026-02-16 05:12:04.367469 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-16 05:12:04.367474 | orchestrator | Monday 16 February 2026 05:12:03 +0000 (0:00:01.021) 0:00:48.061 ******* 2026-02-16 05:12:04.367534 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:04.367578 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:04.367586 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:04.367592 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:12:04.367598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:04.367603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:04.367617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:04.367622 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:12:04.367628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:04.367633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:04.367639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:04.367644 | orchestrator | 
skipping: [testbed-node-1] 2026-02-16 05:12:04.367655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:09.010746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:09.010877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:09.010920 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:12:09.010936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:09.010960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:09.010972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:09.010984 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-16 05:12:09.010996 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-16 05:12:09.011020 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:12:09.011035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:09.011066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:09.011078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:09.011098 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:12:09.011109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:09.011122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:09.011134 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:09.011146 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:12:09.011156 | orchestrator | 2026-02-16 05:12:09.011169 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-16 05:12:09.011181 | orchestrator | Monday 16 February 2026 05:12:05 +0000 (0:00:02.134) 0:00:50.196 ******* 2026-02-16 05:12:09.011191 | orchestrator | 2026-02-16 05:12:09.011201 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-16 05:12:09.011211 | orchestrator | Monday 16 February 2026 05:12:05 +0000 (0:00:00.090) 0:00:50.286 ******* 2026-02-16 05:12:09.011220 | orchestrator | 2026-02-16 
05:12:09.011229 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-16 05:12:09.011239 | orchestrator | Monday 16 February 2026 05:12:06 +0000 (0:00:00.082) 0:00:50.368 ******* 2026-02-16 05:12:09.011248 | orchestrator | 2026-02-16 05:12:09.011260 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-16 05:12:09.011270 | orchestrator | Monday 16 February 2026 05:12:06 +0000 (0:00:00.075) 0:00:50.444 ******* 2026-02-16 05:12:09.011280 | orchestrator | 2026-02-16 05:12:09.011290 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-16 05:12:09.011302 | orchestrator | Monday 16 February 2026 05:12:06 +0000 (0:00:00.079) 0:00:50.524 ******* 2026-02-16 05:12:09.011315 | orchestrator | 2026-02-16 05:12:09.011327 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-16 05:12:09.011338 | orchestrator | Monday 16 February 2026 05:12:06 +0000 (0:00:00.361) 0:00:50.886 ******* 2026-02-16 05:12:09.011348 | orchestrator | 2026-02-16 05:12:09.011357 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-16 05:12:09.011367 | orchestrator | Monday 16 February 2026 05:12:06 +0000 (0:00:00.076) 0:00:50.962 ******* 2026-02-16 05:12:09.011377 | orchestrator | 2026-02-16 05:12:09.011386 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-16 05:12:09.011396 | orchestrator | Monday 16 February 2026 05:12:06 +0000 (0:00:00.108) 0:00:51.070 ******* 2026-02-16 05:12:09.011411 | orchestrator | [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin 2026-02-16 05:12:09.011421 | orchestrator | (): '4f07e0b7-8806-a7d8-7176-00000000000f' 2026-02-16 05:12:09.011474 | orchestrator | fatal: [testbed-manager]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_mth4eslu/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_mth4eslu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_mth4eslu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-16 05:12:11.054564 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_vguqsjjw/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_vguqsjjw/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_vguqsjjw/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-16 05:12:11.054699 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_wpipbi9w/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_wpipbi9w/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_wpipbi9w/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-16 05:12:11.054721 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_89pt9pgl/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_89pt9pgl/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_89pt9pgl/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-16 05:12:11.054742 | orchestrator | fatal: [testbed-node-3]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_5smc80of/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_5smc80of/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_5smc80of/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-16 05:12:11.588467 | orchestrator | 2026-02-16 05:12:11 | INFO  | Task e736ad67-116a-4519-bc1d-d8b2926507e5 (common) was prepared for execution. 2026-02-16 05:12:11.588591 | orchestrator | 2026-02-16 05:12:11 | INFO  | It takes a moment until task e736ad67-116a-4519-bc1d-d8b2926507e5 (common) has been started and output is visible here. 2026-02-16 05:12:18.089585 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_lohrhsks/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_lohrhsks/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_lohrhsks/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n 
^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-16 05:12:18.089738 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_yu0ywfmv/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_yu0ywfmv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_yu0ywfmv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-16 05:12:18.089769 | orchestrator | 2026-02-16 05:12:18.089782 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 05:12:18.089794 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-16 05:12:18.089805 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-16 05:12:18.089821 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-16 05:12:18.089831 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-16 05:12:18.089840 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-16 05:12:18.089848 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-16 
05:12:18.089857 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-16 05:12:18.089866 | orchestrator | 2026-02-16 05:12:18.089875 | orchestrator | 2026-02-16 05:12:18.089884 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 05:12:18.089893 | orchestrator | Monday 16 February 2026 05:12:11 +0000 (0:00:04.256) 0:00:55.327 ******* 2026-02-16 05:12:18.089902 | orchestrator | =============================================================================== 2026-02-16 05:12:18.089911 | orchestrator | common : Copying over config.json files for services -------------------- 4.26s 2026-02-16 05:12:18.089920 | orchestrator | common : Restart fluentd container -------------------------------------- 4.26s 2026-02-16 05:12:18.089928 | orchestrator | service-check-containers : common | Check containers -------------------- 3.75s 2026-02-16 05:12:18.089937 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.46s 2026-02-16 05:12:18.089945 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.46s 2026-02-16 05:12:18.089954 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.66s 2026-02-16 05:12:18.089963 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.39s 2026-02-16 05:12:18.089971 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.38s 2026-02-16 05:12:18.089980 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.20s 2026-02-16 05:12:18.089988 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.20s 2026-02-16 05:12:18.089997 | orchestrator | common : include_tasks -------------------------------------------------- 2.18s 2026-02-16 05:12:18.090005 | orchestrator | 
service-check-containers : Include tasks -------------------------------- 2.14s 2026-02-16 05:12:18.090049 | orchestrator | common : include_tasks -------------------------------------------------- 2.12s 2026-02-16 05:12:18.090062 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.00s 2026-02-16 05:12:18.090073 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.97s 2026-02-16 05:12:18.090083 | orchestrator | common : Copying over kolla.target -------------------------------------- 1.89s 2026-02-16 05:12:18.090093 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.67s 2026-02-16 05:12:18.090110 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.34s 2026-02-16 05:12:18.090121 | orchestrator | service-check-containers : common | Notify handlers to restart containers --- 1.02s 2026-02-16 05:12:18.090131 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 0.98s 2026-02-16 05:12:18.090141 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-16 05:12:18.090152 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-16 05:12:18.090171 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-16 05:12:18.090180 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-16 05:12:18.090198 | orchestrator | 2026-02-16 05:12:18.090214 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-16 05:12:27.600961 | orchestrator | 2026-02-16 05:12:27.601048 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-16 05:12:27.601060 | orchestrator | Monday 16 February 2026 05:12:18 +0000 (0:00:02.121) 0:00:02.121 ******* 2026-02-16 05:12:27.601082 | 
orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 05:12:27.601090 | orchestrator | 2026-02-16 05:12:27.601094 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-16 05:12:27.601098 | orchestrator | Monday 16 February 2026 05:12:20 +0000 (0:00:02.167) 0:00:04.289 ******* 2026-02-16 05:12:27.601103 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 05:12:27.601107 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 05:12:27.601112 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 05:12:27.601118 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 05:12:27.601124 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 05:12:27.601130 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 05:12:27.601137 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 05:12:27.601143 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 05:12:27.601150 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 05:12:27.601156 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-16 05:12:27.601162 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 05:12:27.601168 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 05:12:27.601174 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 05:12:27.601178 | orchestrator | ok: 
[testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 05:12:27.601182 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 05:12:27.601186 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 05:12:27.601190 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-16 05:12:27.601193 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 05:12:27.601197 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 05:12:27.601201 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 05:12:27.601220 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-16 05:12:27.601224 | orchestrator | 2026-02-16 05:12:27.601228 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-16 05:12:27.601232 | orchestrator | Monday 16 February 2026 05:12:22 +0000 (0:00:02.609) 0:00:06.899 ******* 2026-02-16 05:12:27.601236 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 05:12:27.601241 | orchestrator | 2026-02-16 05:12:27.601244 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-16 05:12:27.601248 | orchestrator | Monday 16 February 2026 05:12:25 +0000 (0:00:02.185) 0:00:09.085 ******* 2026-02-16 05:12:27.601254 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:27.601261 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:27.601279 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:27.601284 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-02-16 05:12:27.601288 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:27.601292 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:27.601300 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:27.601304 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:27.601309 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:27.601318 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:29.327584 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:29.327678 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:29.327691 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:29.327720 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:29.327732 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:29.327741 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:29.327751 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:29.327783 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:29.327793 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:29.327802 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:29.327812 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:29.327827 | orchestrator | 2026-02-16 05:12:29.327839 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-16 05:12:29.327848 | orchestrator | Monday 16 February 2026 05:12:28 +0000 (0:00:03.511) 0:00:12.596 
******* 2026-02-16 05:12:29.327859 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:29.327869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:29.327880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:29.327890 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:29.327911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:30.184161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:30.184251 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:12:30.184260 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:30.184267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:30.184272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:30.184277 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:12:30.184281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:30.184287 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:30.184292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:30.184296 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:12:30.184312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:30.184320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:30.184355 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:12:30.184361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:30.184365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:30.184370 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:12:30.184374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:30.184379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:30.184383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:30.184388 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:12:30.184398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:32.443826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:32.443934 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:12:32.443949 | orchestrator | 2026-02-16 05:12:32.443963 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-16 05:12:32.443976 | orchestrator | Monday 16 February 2026 05:12:30 +0000 (0:00:01.610) 0:00:14.206 ******* 2026-02-16 05:12:32.443989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:32.444003 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:32.444015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:32.444026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:32.444039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-16 05:12:32.444075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:32.444105 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:32.444118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:32.444129 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:32.444140 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:12:32.444151 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:12:32.444163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:32.444174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:32.444185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:32.444209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:32.444229 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:12:32.444250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:40.015826 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:12:40.015927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 
05:12:40.015940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:40.015948 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:12:40.015955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:12:40.015962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:40.015970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:40.015997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:40.016017 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:12:40.016025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:40.016032 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:12:40.016039 | orchestrator | 2026-02-16 05:12:40.016047 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-02-16 05:12:40.016056 | orchestrator | Monday 16 February 2026 05:12:32 +0000 (0:00:02.259) 0:00:16.465 ******* 2026-02-16 05:12:40.016063 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:12:40.016070 | orchestrator | skipping: [testbed-node-0] 
2026-02-16 05:12:40.016091 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:12:40.016098 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:12:40.016104 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:12:40.016111 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:12:40.016118 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:12:40.016125 | orchestrator | 2026-02-16 05:12:40.016132 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-16 05:12:40.016139 | orchestrator | Monday 16 February 2026 05:12:33 +0000 (0:00:01.030) 0:00:17.496 ******* 2026-02-16 05:12:40.016146 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:12:40.016153 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:12:40.016160 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:12:40.016166 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:12:40.016173 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:12:40.016180 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:12:40.016187 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:12:40.016193 | orchestrator | 2026-02-16 05:12:40.016201 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-16 05:12:40.016208 | orchestrator | Monday 16 February 2026 05:12:34 +0000 (0:00:00.937) 0:00:18.434 ******* 2026-02-16 05:12:40.016215 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:12:40.016222 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:12:40.016229 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:12:40.016236 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:12:40.016243 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:12:40.016250 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:12:40.016257 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:12:40.016264 | orchestrator | 2026-02-16 05:12:40.016271 | 
orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-02-16 05:12:40.016278 | orchestrator | Monday 16 February 2026 05:12:35 +0000 (0:00:00.762) 0:00:19.196 ******* 2026-02-16 05:12:40.016285 | orchestrator | ok: [testbed-manager] 2026-02-16 05:12:40.016293 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:12:40.016300 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:12:40.016307 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:12:40.016314 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:12:40.016321 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:12:40.016328 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:12:40.016340 | orchestrator | 2026-02-16 05:12:40.016348 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-16 05:12:40.016355 | orchestrator | Monday 16 February 2026 05:12:37 +0000 (0:00:02.015) 0:00:21.211 ******* 2026-02-16 05:12:40.016362 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:40.016371 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:40.016379 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:40.016390 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:40.016404 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:40.956187 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:40.956317 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:40.956379 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:40.956399 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:40.956412 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:40.956439 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:40.956472 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:40.956486 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:40.956537 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:40.956560 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:40.956572 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:40.956585 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:40.956596 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:40.956608 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:40.956619 | 
orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:40.956646 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:53.712979 | orchestrator | 2026-02-16 05:12:53.713069 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-16 05:12:53.713082 | orchestrator | Monday 16 February 2026 05:12:40 +0000 (0:00:03.770) 0:00:24.982 ******* 2026-02-16 05:12:53.713112 | orchestrator | [WARNING]: Skipped 2026-02-16 05:12:53.713122 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-16 05:12:53.713131 | orchestrator | to this access issue: 2026-02-16 05:12:53.713139 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-16 05:12:53.713147 | orchestrator | directory 2026-02-16 05:12:53.713155 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-16 05:12:53.713164 | orchestrator | 2026-02-16 05:12:53.713173 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-16 05:12:53.713181 | orchestrator | Monday 16 February 2026 05:12:42 +0000 (0:00:01.286) 0:00:26.268 ******* 2026-02-16 
05:12:53.713189 | orchestrator | [WARNING]: Skipped 2026-02-16 05:12:53.713197 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-16 05:12:53.713205 | orchestrator | to this access issue: 2026-02-16 05:12:53.713212 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-16 05:12:53.713220 | orchestrator | directory 2026-02-16 05:12:53.713228 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-16 05:12:53.713236 | orchestrator | 2026-02-16 05:12:53.713243 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-16 05:12:53.713251 | orchestrator | Monday 16 February 2026 05:12:43 +0000 (0:00:00.932) 0:00:27.201 ******* 2026-02-16 05:12:53.713259 | orchestrator | [WARNING]: Skipped 2026-02-16 05:12:53.713267 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-16 05:12:53.713274 | orchestrator | to this access issue: 2026-02-16 05:12:53.713282 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-16 05:12:53.713290 | orchestrator | directory 2026-02-16 05:12:53.713298 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-16 05:12:53.713306 | orchestrator | 2026-02-16 05:12:53.713313 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-16 05:12:53.713321 | orchestrator | Monday 16 February 2026 05:12:44 +0000 (0:00:00.952) 0:00:28.154 ******* 2026-02-16 05:12:53.713329 | orchestrator | [WARNING]: Skipped 2026-02-16 05:12:53.713337 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-16 05:12:53.713344 | orchestrator | to this access issue: 2026-02-16 05:12:53.713352 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-16 05:12:53.713360 | orchestrator | 
directory 2026-02-16 05:12:53.713368 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-16 05:12:53.713375 | orchestrator | 2026-02-16 05:12:53.713383 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-16 05:12:53.713391 | orchestrator | Monday 16 February 2026 05:12:45 +0000 (0:00:00.933) 0:00:29.087 ******* 2026-02-16 05:12:53.713398 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:12:53.713406 | orchestrator | ok: [testbed-manager] 2026-02-16 05:12:53.713414 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:12:53.713422 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:12:53.713430 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:12:53.713438 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:12:53.713445 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:12:53.713453 | orchestrator | 2026-02-16 05:12:53.713461 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-16 05:12:53.713468 | orchestrator | Monday 16 February 2026 05:12:47 +0000 (0:00:02.798) 0:00:31.886 ******* 2026-02-16 05:12:53.713476 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 05:12:53.713485 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 05:12:53.713493 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 05:12:53.713536 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 05:12:53.713552 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 05:12:53.713561 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 05:12:53.713570 | orchestrator | ok: 
[testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-16 05:12:53.713579 | orchestrator | 2026-02-16 05:12:53.713588 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-16 05:12:53.713597 | orchestrator | Monday 16 February 2026 05:12:50 +0000 (0:00:02.226) 0:00:34.112 ******* 2026-02-16 05:12:53.713606 | orchestrator | ok: [testbed-manager] 2026-02-16 05:12:53.713615 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:12:53.713624 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:12:53.713632 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:12:53.713641 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:12:53.713650 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:12:53.713659 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:12:53.713668 | orchestrator | 2026-02-16 05:12:53.713677 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-16 05:12:53.713686 | orchestrator | Monday 16 February 2026 05:12:51 +0000 (0:00:01.848) 0:00:35.961 ******* 2026-02-16 05:12:53.713714 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:53.713728 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:53.713738 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:53.713748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:53.713758 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:53.713779 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:12:53.713790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:12:53.713799 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:12:53.713816 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:13:00.493749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:00.493852 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:13:00.493864 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:00.493886 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:13:00.493900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:00.493905 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:00.493911 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:00.493927 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:13:00.493931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:00.493935 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:00.493944 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:00.493948 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:00.493952 | orchestrator | 2026-02-16 05:13:00.493958 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-16 05:13:00.493966 | orchestrator | Monday 16 February 2026 05:12:53 +0000 (0:00:01.920) 0:00:37.882 ******* 2026-02-16 05:13:00.493970 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-16 05:13:00.493975 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-16 05:13:00.493981 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-16 05:13:00.493986 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-16 
05:13:00.493992 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-16 05:13:00.493998 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-16 05:13:00.494004 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-16 05:13:00.494010 | orchestrator | 2026-02-16 05:13:00.494061 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-16 05:13:00.494067 | orchestrator | Monday 16 February 2026 05:12:55 +0000 (0:00:02.106) 0:00:39.989 ******* 2026-02-16 05:13:00.494073 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-16 05:13:00.494078 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-16 05:13:00.494083 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-16 05:13:00.494089 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-16 05:13:00.494095 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-16 05:13:00.494101 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-16 05:13:00.494108 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-16 05:13:00.494113 | orchestrator | 2026-02-16 05:13:00.494117 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-16 05:13:00.494121 | orchestrator | Monday 16 February 2026 05:12:58 +0000 (0:00:02.146) 0:00:42.135 ******* 2026-02-16 05:13:00.494130 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:13:01.458834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:13:01.458965 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:13:01.458982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:13:01.459011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:13:01.459023 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:13:01.459034 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-16 05:13:01.459046 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:01.459080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:01.459103 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:01.459114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:01.459131 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:01.459143 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:01.459158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:01.459169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:01.459196 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:03.122314 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-16 05:13:03.122405 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:03.122419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:03.122446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:03.122457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:13:03.122467 | orchestrator | 2026-02-16 05:13:03.122478 | orchestrator | TASK 
[service-check-containers : common | Notify handlers to restart containers] *** 2026-02-16 05:13:03.122489 | orchestrator | Monday 16 February 2026 05:13:01 +0000 (0:00:03.352) 0:00:45.488 ******* 2026-02-16 05:13:03.122568 | orchestrator | changed: [testbed-manager] => { 2026-02-16 05:13:03.122579 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:13:03.122588 | orchestrator | } 2026-02-16 05:13:03.122597 | orchestrator | changed: [testbed-node-0] => { 2026-02-16 05:13:03.122606 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:13:03.122615 | orchestrator | } 2026-02-16 05:13:03.122624 | orchestrator | changed: [testbed-node-1] => { 2026-02-16 05:13:03.122633 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:13:03.122642 | orchestrator | } 2026-02-16 05:13:03.122651 | orchestrator | changed: [testbed-node-2] => { 2026-02-16 05:13:03.122660 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:13:03.122669 | orchestrator | } 2026-02-16 05:13:03.122698 | orchestrator | changed: [testbed-node-3] => { 2026-02-16 05:13:03.122707 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:13:03.122716 | orchestrator | } 2026-02-16 05:13:03.122725 | orchestrator | changed: [testbed-node-4] => { 2026-02-16 05:13:03.122733 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:13:03.122742 | orchestrator | } 2026-02-16 05:13:03.122752 | orchestrator | changed: [testbed-node-5] => { 2026-02-16 05:13:03.122761 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:13:03.122770 | orchestrator | } 2026-02-16 05:13:03.122781 | orchestrator | 2026-02-16 05:13:03.122791 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-16 05:13:03.122801 | orchestrator | Monday 16 February 2026 05:13:02 +0000 (0:00:01.042) 0:00:46.530 ******* 2026-02-16 05:13:03.122813 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:13:03.122847 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:03.122858 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:03.122868 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:13:03.122879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:13:03.122897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:03.122909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:03.122928 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:13:03.122939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:13:03.122951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:03.122962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:03.122978 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:13:05.537915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:13:05.537988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:05.538006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:05.538013 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:13:05.538051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:13:05.538069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:05.538073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:05.538078 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-16 05:13:05.538082 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-16 05:13:05.538091 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:13:05.538104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:13:05.538108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:05.538113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:05.538117 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:13:05.538123 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-16 05:13:05.538131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:05.538135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:13:05.538139 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:13:05.538143 | orchestrator | 2026-02-16 05:13:05.538148 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-16 05:13:05.538152 | orchestrator | Monday 16 February 2026 05:13:04 +0000 (0:00:02.126) 0:00:48.657 ******* 2026-02-16 05:13:05.538156 | orchestrator | 2026-02-16 05:13:05.538160 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-16 05:13:05.538163 | orchestrator | Monday 16 February 2026 05:13:04 +0000 (0:00:00.093) 0:00:48.751 ******* 2026-02-16 05:13:05.538167 | orchestrator | 2026-02-16 05:13:05.538171 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-16 05:13:05.538175 | orchestrator | Monday 16 February 2026 05:13:04 +0000 (0:00:00.081) 0:00:48.832 ******* 2026-02-16 05:13:05.538178 | orchestrator | 2026-02-16 05:13:05.538182 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-16 05:13:05.538186 | orchestrator | Monday 16 February 2026 05:13:04 +0000 (0:00:00.095) 0:00:48.928 ******* 2026-02-16 05:13:05.538190 | orchestrator | 2026-02-16 05:13:05.538193 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2026-02-16 05:13:05.538197 | orchestrator | Monday 16 February 2026 05:13:04 +0000 (0:00:00.075) 0:00:49.004 ******* 2026-02-16 05:13:05.538201 | orchestrator | 2026-02-16 05:13:05.538204 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-16 05:13:05.538208 | orchestrator | Monday 16 February 2026 05:13:05 +0000 (0:00:00.354) 0:00:49.358 ******* 2026-02-16 05:13:05.538212 | orchestrator | 2026-02-16 05:13:05.538216 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-16 05:13:05.538220 | orchestrator | Monday 16 February 2026 05:13:05 +0000 (0:00:00.075) 0:00:49.434 ******* 2026-02-16 05:13:05.538223 | orchestrator | 2026-02-16 05:13:05.538227 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-16 05:13:05.538234 | orchestrator | Monday 16 February 2026 05:13:05 +0000 (0:00:00.107) 0:00:49.541 ******* 2026-02-16 05:14:29.951415 | orchestrator | changed: [testbed-manager] 2026-02-16 05:14:29.951588 | orchestrator | changed: [testbed-node-4] 2026-02-16 05:14:29.951609 | orchestrator | changed: [testbed-node-3] 2026-02-16 05:14:29.951627 | orchestrator | changed: [testbed-node-5] 2026-02-16 05:14:29.951645 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:14:29.951662 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:14:29.951680 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:14:29.951698 | orchestrator | 2026-02-16 05:14:29.951716 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-02-16 05:14:29.951735 | orchestrator | Monday 16 February 2026 05:13:40 +0000 (0:00:34.525) 0:01:24.067 ******* 2026-02-16 05:14:29.951752 | orchestrator | changed: [testbed-manager] 2026-02-16 05:14:29.951770 | orchestrator | changed: [testbed-node-4] 2026-02-16 05:14:29.951808 | orchestrator | changed: 
[testbed-node-3] 2026-02-16 05:14:29.951826 | orchestrator | changed: [testbed-node-5] 2026-02-16 05:14:29.951843 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:14:29.951861 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:14:29.951878 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:14:29.951896 | orchestrator | 2026-02-16 05:14:29.951914 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-16 05:14:29.951934 | orchestrator | Monday 16 February 2026 05:14:15 +0000 (0:00:35.558) 0:01:59.625 ******* 2026-02-16 05:14:29.951954 | orchestrator | ok: [testbed-manager] 2026-02-16 05:14:29.951974 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:14:29.952011 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:14:29.952031 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:14:29.952048 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:14:29.952066 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:14:29.952084 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:14:29.952102 | orchestrator | 2026-02-16 05:14:29.952120 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-02-16 05:14:29.952138 | orchestrator | Monday 16 February 2026 05:14:17 +0000 (0:00:02.024) 0:02:01.650 ******* 2026-02-16 05:14:29.952156 | orchestrator | changed: [testbed-manager] 2026-02-16 05:14:29.952174 | orchestrator | changed: [testbed-node-3] 2026-02-16 05:14:29.952192 | orchestrator | changed: [testbed-node-4] 2026-02-16 05:14:29.952210 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:14:29.952228 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:14:29.952246 | orchestrator | changed: [testbed-node-5] 2026-02-16 05:14:29.952263 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:14:29.952281 | orchestrator | 2026-02-16 05:14:29.952310 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 
05:14:29.952328 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-16 05:14:29.952347 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-16 05:14:29.952365 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-16 05:14:29.952383 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-16 05:14:29.952401 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-16 05:14:29.952420 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-16 05:14:29.952438 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-16 05:14:29.952455 | orchestrator | 2026-02-16 05:14:29.952473 | orchestrator | 2026-02-16 05:14:29.952491 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 05:14:29.952509 | orchestrator | Monday 16 February 2026 05:14:29 +0000 (0:00:11.712) 0:02:13.363 ******* 2026-02-16 05:14:29.952550 | orchestrator | =============================================================================== 2026-02-16 05:14:29.952568 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 35.56s 2026-02-16 05:14:29.952586 | orchestrator | common : Restart fluentd container ------------------------------------- 34.53s 2026-02-16 05:14:29.952604 | orchestrator | common : Restart cron container ---------------------------------------- 11.71s 2026-02-16 05:14:29.952622 | orchestrator | common : Copying over config.json files for services -------------------- 3.77s 2026-02-16 05:14:29.952640 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.51s 
2026-02-16 05:14:29.952667 | orchestrator | service-check-containers : common | Check containers -------------------- 3.35s 2026-02-16 05:14:29.952685 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.80s 2026-02-16 05:14:29.952703 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.61s 2026-02-16 05:14:29.952721 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.26s 2026-02-16 05:14:29.952739 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.23s 2026-02-16 05:14:29.952775 | orchestrator | common : include_tasks -------------------------------------------------- 2.19s 2026-02-16 05:14:29.952792 | orchestrator | common : include_tasks -------------------------------------------------- 2.17s 2026-02-16 05:14:29.952810 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.15s 2026-02-16 05:14:29.952828 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.13s 2026-02-16 05:14:29.952859 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.11s 2026-02-16 05:14:29.952877 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.02s 2026-02-16 05:14:29.952894 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.02s 2026-02-16 05:14:29.952912 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.92s 2026-02-16 05:14:29.952930 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.85s 2026-02-16 05:14:29.952948 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.61s 2026-02-16 05:14:30.338110 | orchestrator | + osism apply -a upgrade loadbalancer 2026-02-16 05:14:32.667241 | orchestrator | 
2026-02-16 05:14:32 | INFO  | Task 39bcc7d4-3b58-48f1-96c0-e15d448a09ed (loadbalancer) was prepared for execution. 2026-02-16 05:14:32.667361 | orchestrator | 2026-02-16 05:14:32 | INFO  | It takes a moment until task 39bcc7d4-3b58-48f1-96c0-e15d448a09ed (loadbalancer) has been started and output is visible here. 2026-02-16 05:15:08.152780 | orchestrator | 2026-02-16 05:15:08.152912 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 05:15:08.152938 | orchestrator | 2026-02-16 05:15:08.152955 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 05:15:08.152971 | orchestrator | Monday 16 February 2026 05:14:38 +0000 (0:00:01.740) 0:00:01.741 ******* 2026-02-16 05:15:08.152988 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:15:08.153006 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:15:08.153023 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:15:08.153040 | orchestrator | 2026-02-16 05:15:08.153055 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 05:15:08.153073 | orchestrator | Monday 16 February 2026 05:14:41 +0000 (0:00:02.129) 0:00:03.870 ******* 2026-02-16 05:15:08.153089 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-02-16 05:15:08.153105 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-02-16 05:15:08.153144 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-02-16 05:15:08.153161 | orchestrator | 2026-02-16 05:15:08.153187 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-02-16 05:15:08.153205 | orchestrator | 2026-02-16 05:15:08.153222 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-16 05:15:08.153238 | orchestrator | Monday 16 February 2026 05:14:43 +0000 (0:00:02.501) 0:00:06.371 
******* 2026-02-16 05:15:08.153256 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:15:08.153273 | orchestrator | 2026-02-16 05:15:08.153291 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] *** 2026-02-16 05:15:08.153307 | orchestrator | Monday 16 February 2026 05:14:45 +0000 (0:00:02.229) 0:00:08.601 ******* 2026-02-16 05:15:08.153319 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:15:08.153352 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:15:08.153362 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:15:08.153372 | orchestrator | 2026-02-16 05:15:08.153381 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] ********************* 2026-02-16 05:15:08.153391 | orchestrator | Monday 16 February 2026 05:14:47 +0000 (0:00:02.138) 0:00:10.739 ******* 2026-02-16 05:15:08.153400 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:15:08.153409 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:15:08.153419 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:15:08.153428 | orchestrator | 2026-02-16 05:15:08.153437 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-02-16 05:15:08.153446 | orchestrator | Monday 16 February 2026 05:14:49 +0000 (0:00:02.055) 0:00:12.795 ******* 2026-02-16 05:15:08.153456 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:15:08.153466 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:15:08.153475 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:15:08.153485 | orchestrator | 2026-02-16 05:15:08.153494 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-16 05:15:08.153504 | orchestrator | Monday 16 February 2026 05:14:51 +0000 (0:00:01.640) 0:00:14.436 ******* 2026-02-16 05:15:08.153513 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-16 05:15:08.153523 | orchestrator | 2026-02-16 05:15:08.153532 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-02-16 05:15:08.153568 | orchestrator | Monday 16 February 2026 05:14:53 +0000 (0:00:01.872) 0:00:16.308 ******* 2026-02-16 05:15:08.153578 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:15:08.153587 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:15:08.153597 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:15:08.153606 | orchestrator | 2026-02-16 05:15:08.153616 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-02-16 05:15:08.153625 | orchestrator | Monday 16 February 2026 05:14:55 +0000 (0:00:01.684) 0:00:17.993 ******* 2026-02-16 05:15:08.153635 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-16 05:15:08.153644 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-16 05:15:08.153654 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-16 05:15:08.153663 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-16 05:15:08.153672 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-16 05:15:08.153682 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-16 05:15:08.153691 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-16 05:15:08.153702 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-16 05:15:08.153711 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-16 05:15:08.153726 | orchestrator | ok: [testbed-node-0] => 
(item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-16 05:15:08.153743 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-16 05:15:08.153761 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-16 05:15:08.153779 | orchestrator | 2026-02-16 05:15:08.153804 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-16 05:15:08.153825 | orchestrator | Monday 16 February 2026 05:14:59 +0000 (0:00:04.115) 0:00:22.108 ******* 2026-02-16 05:15:08.153843 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-16 05:15:08.153861 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-16 05:15:08.153878 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-16 05:15:08.153894 | orchestrator | 2026-02-16 05:15:08.153909 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-16 05:15:08.153965 | orchestrator | Monday 16 February 2026 05:15:01 +0000 (0:00:02.014) 0:00:24.123 ******* 2026-02-16 05:15:08.153985 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-16 05:15:08.154003 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-16 05:15:08.154099 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-16 05:15:08.154120 | orchestrator | 2026-02-16 05:15:08.154137 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-16 05:15:08.154153 | orchestrator | Monday 16 February 2026 05:15:03 +0000 (0:00:02.187) 0:00:26.310 ******* 2026-02-16 05:15:08.154170 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-16 05:15:08.154187 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:15:08.154205 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-16 05:15:08.154217 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:15:08.154227 | 
orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-16 05:15:08.154236 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:15:08.154246 | orchestrator | 2026-02-16 05:15:08.154264 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-16 05:15:08.154273 | orchestrator | Monday 16 February 2026 05:15:05 +0000 (0:00:01.894) 0:00:28.205 ******* 2026-02-16 05:15:08.154287 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-16 05:15:08.154305 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-16 05:15:08.154316 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-16 05:15:08.154326 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 05:15:08.154346 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 05:15:08.154367 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 05:15:19.058840 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 05:15:19.058962 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 05:15:19.058976 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 05:15:19.058986 | orchestrator | 2026-02-16 05:15:19.058997 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-16 05:15:19.059007 | orchestrator | Monday 16 February 2026 05:15:08 +0000 (0:00:02.746) 0:00:30.952 ******* 2026-02-16 05:15:19.059016 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:15:19.059025 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:15:19.059033 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:15:19.059041 | orchestrator | 2026-02-16 05:15:19.059049 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-16 05:15:19.059057 | orchestrator | Monday 16 February 2026 05:15:10 +0000 (0:00:01.979) 0:00:32.931 ******* 2026-02-16 05:15:19.059065 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-02-16 05:15:19.059075 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-02-16 05:15:19.059083 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-02-16 05:15:19.059091 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-02-16 05:15:19.059098 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-02-16 05:15:19.059106 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-02-16 05:15:19.059131 | orchestrator | 2026-02-16 05:15:19.059140 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-16 05:15:19.059148 | orchestrator | Monday 16 February 2026 05:15:12 +0000 (0:00:02.854) 0:00:35.785 ******* 2026-02-16 05:15:19.059156 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:15:19.059163 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:15:19.059171 | orchestrator | ok: 
[testbed-node-2] 2026-02-16 05:15:19.059179 | orchestrator | 2026-02-16 05:15:19.059187 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-16 05:15:19.059195 | orchestrator | Monday 16 February 2026 05:15:15 +0000 (0:00:02.233) 0:00:38.019 ******* 2026-02-16 05:15:19.059203 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:15:19.059211 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:15:19.059219 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:15:19.059227 | orchestrator | 2026-02-16 05:15:19.059235 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-16 05:15:19.059243 | orchestrator | Monday 16 February 2026 05:15:17 +0000 (0:00:02.154) 0:00:40.173 ******* 2026-02-16 05:15:19.059251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-16 05:15:19.059284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 05:15:19.059298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:15:19.059314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__33053f83c66eb129bb95a0f92f9785717a549dc0', '__omit_place_holder__33053f83c66eb129bb95a0f92f9785717a549dc0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-16 05:15:19.059328 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:15:19.059343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-16 05:15:19.059366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 05:15:19.059380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:15:19.059393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__33053f83c66eb129bb95a0f92f9785717a549dc0', '__omit_place_holder__33053f83c66eb129bb95a0f92f9785717a549dc0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-16 05:15:19.059406 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:15:19.059428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-16 05:15:23.144435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 05:15:23.144613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:15:23.144686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__33053f83c66eb129bb95a0f92f9785717a549dc0', '__omit_place_holder__33053f83c66eb129bb95a0f92f9785717a549dc0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-16 05:15:23.144710 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:15:23.144733 | orchestrator | 2026-02-16 05:15:23.144755 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-16 05:15:23.144775 | orchestrator | Monday 16 February 2026 05:15:19 +0000 (0:00:01.681) 0:00:41.855 ******* 2026-02-16 05:15:23.144796 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-16 05:15:23.144836 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-16 05:15:23.144861 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-16 05:15:23.144894 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 05:15:23.144917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:15:23.144928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__33053f83c66eb129bb95a0f92f9785717a549dc0', '__omit_place_holder__33053f83c66eb129bb95a0f92f9785717a549dc0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-16 05:15:23.144940 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 05:15:23.144951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:15:23.144970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__33053f83c66eb129bb95a0f92f9785717a549dc0', '__omit_place_holder__33053f83c66eb129bb95a0f92f9785717a549dc0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-16 05:15:23.145002 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 05:15:37.026864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:15:37.027011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__33053f83c66eb129bb95a0f92f9785717a549dc0', '__omit_place_holder__33053f83c66eb129bb95a0f92f9785717a549dc0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-16 05:15:37.027028 | orchestrator | 2026-02-16 05:15:37.027042 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-16 05:15:37.027053 | orchestrator | Monday 16 February 2026 05:15:23 +0000 (0:00:04.093) 0:00:45.948 ******* 2026-02-16 05:15:37.027064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-16 05:15:37.027076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-16 05:15:37.027101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-16 05:15:37.027111 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 05:15:37.027148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 05:15:37.027159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2026-02-16 05:15:37.027169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 05:15:37.027179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 05:15:37.027189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 05:15:37.027199 | orchestrator | 2026-02-16 05:15:37.027209 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-16 05:15:37.027219 | orchestrator | Monday 16 February 2026 05:15:27 +0000 (0:00:04.662) 0:00:50.611 ******* 2026-02-16 
05:15:37.027228 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-16 05:15:37.027240 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-16 05:15:37.027254 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-16 05:15:37.027264 | orchestrator | 2026-02-16 05:15:37.027274 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-16 05:15:37.027284 | orchestrator | Monday 16 February 2026 05:15:30 +0000 (0:00:02.771) 0:00:53.382 ******* 2026-02-16 05:15:37.027300 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-16 05:15:37.027309 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-16 05:15:37.027319 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-16 05:15:37.027328 | orchestrator | 2026-02-16 05:15:37.027338 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-16 05:15:37.027347 | orchestrator | Monday 16 February 2026 05:15:35 +0000 (0:00:04.558) 0:00:57.941 ******* 2026-02-16 05:15:37.027357 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:15:37.027368 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:15:37.027384 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:15:57.306084 | orchestrator | 2026-02-16 05:15:57.306176 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-16 05:15:57.306187 | orchestrator | Monday 16 February 2026 05:15:37 +0000 (0:00:01.886) 0:00:59.828 ******* 2026-02-16 05:15:57.306195 | orchestrator | ok: [testbed-node-0] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-16 05:15:57.306202 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-16 05:15:57.306208 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-16 05:15:57.306214 | orchestrator | 2026-02-16 05:15:57.306221 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-16 05:15:57.306228 | orchestrator | Monday 16 February 2026 05:15:40 +0000 (0:00:03.045) 0:01:02.873 ******* 2026-02-16 05:15:57.306234 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-16 05:15:57.306241 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-16 05:15:57.306247 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-16 05:15:57.306253 | orchestrator | 2026-02-16 05:15:57.306260 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-16 05:15:57.306266 | orchestrator | Monday 16 February 2026 05:15:42 +0000 (0:00:02.772) 0:01:05.646 ******* 2026-02-16 05:15:57.306272 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:15:57.306279 | orchestrator | 2026-02-16 05:15:57.306285 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-16 05:15:57.306292 | orchestrator | Monday 16 February 2026 05:15:44 +0000 (0:00:01.902) 0:01:07.548 ******* 2026-02-16 05:15:57.306299 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-02-16 05:15:57.306305 | orchestrator | ok: 
[testbed-node-1] => (item=haproxy.pem) 2026-02-16 05:15:57.306312 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-02-16 05:15:57.306318 | orchestrator | 2026-02-16 05:15:57.306324 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-16 05:15:57.306330 | orchestrator | Monday 16 February 2026 05:15:47 +0000 (0:00:02.609) 0:01:10.157 ******* 2026-02-16 05:15:57.306337 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-16 05:15:57.306343 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-16 05:15:57.306349 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-16 05:15:57.306355 | orchestrator | 2026-02-16 05:15:57.306361 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-02-16 05:15:57.306368 | orchestrator | Monday 16 February 2026 05:15:49 +0000 (0:00:02.573) 0:01:12.731 ******* 2026-02-16 05:15:57.306374 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:15:57.306381 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:15:57.306403 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:15:57.306410 | orchestrator | 2026-02-16 05:15:57.306416 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-02-16 05:15:57.306423 | orchestrator | Monday 16 February 2026 05:15:51 +0000 (0:00:01.383) 0:01:14.114 ******* 2026-02-16 05:15:57.306429 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:15:57.306435 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:15:57.306441 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:15:57.306459 | orchestrator | 2026-02-16 05:15:57.306466 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-16 05:15:57.306472 | orchestrator | Monday 16 February 2026 05:15:53 +0000 (0:00:01.883) 0:01:15.998 ******* 2026-02-16 
05:15:57.306493 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-16 05:15:57.306503 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-16 05:15:57.306523 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-16 05:15:57.306530 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 05:15:57.306537 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 05:15:57.306570 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 05:15:57.306579 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 05:15:57.306589 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 05:15:57.306600 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 05:16:01.096684 | orchestrator | 2026-02-16 05:16:01.096759 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-16 05:16:01.096767 | 
orchestrator | Monday 16 February 2026 05:15:57 +0000 (0:00:04.106) 0:01:20.104 ******* 2026-02-16 05:16:01.096776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-16 05:16:01.096784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 05:16:01.096789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:16:01.096811 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:16:01.096820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-16 05:16:01.096842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 05:16:01.096850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:16:01.096857 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:16:01.096878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-16 05:16:01.096887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 05:16:01.096895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:16:01.096909 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:16:01.096914 | orchestrator | 2026-02-16 05:16:01.096919 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-16 05:16:01.096924 | orchestrator | Monday 16 February 2026 05:15:58 +0000 (0:00:01.641) 0:01:21.746 ******* 2026-02-16 05:16:01.096929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-16 05:16:01.096934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 05:16:01.096942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:16:01.096947 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:16:01.096957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-16 05:16:12.768156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 05:16:12.768253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:16:12.768261 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:16:12.768270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-16 05:16:12.768276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 05:16:12.768295 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:16:12.768302 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:16:12.768308 | orchestrator | 2026-02-16 05:16:12.768316 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-16 05:16:12.768323 | orchestrator | Monday 16 February 2026 05:16:01 +0000 (0:00:02.159) 0:01:23.906 ******* 2026-02-16 05:16:12.768328 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-16 05:16:12.768336 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-16 05:16:12.768341 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-16 05:16:12.768347 | orchestrator | 2026-02-16 05:16:12.768353 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-16 05:16:12.768358 | orchestrator | Monday 16 February 2026 05:16:03 +0000 (0:00:02.474) 0:01:26.381 ******* 2026-02-16 05:16:12.768364 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-16 05:16:12.768370 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-16 05:16:12.768376 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 
2026-02-16 05:16:12.768381 | orchestrator | 2026-02-16 05:16:12.768402 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-16 05:16:12.768417 | orchestrator | Monday 16 February 2026 05:16:06 +0000 (0:00:02.483) 0:01:28.865 ******* 2026-02-16 05:16:12.768423 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-16 05:16:12.768429 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-16 05:16:12.768436 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-16 05:16:12.768441 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:16:12.768447 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-16 05:16:12.768453 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-16 05:16:12.768459 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:16:12.768465 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-16 05:16:12.768471 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:16:12.768476 | orchestrator | 2026-02-16 05:16:12.768483 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-16 05:16:12.768489 | orchestrator | Monday 16 February 2026 05:16:08 +0000 (0:00:02.489) 0:01:31.355 ******* 2026-02-16 05:16:12.768495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-16 05:16:12.768502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-16 05:16:12.768508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-16 05:16:12.768514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 05:16:12.768533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 05:16:16.242796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 05:16:16.242918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 05:16:16.242948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 05:16:16.242990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 05:16:16.243004 | orchestrator | 2026-02-16 05:16:16.243018 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-16 05:16:16.243031 | orchestrator | Monday 16 February 2026 05:16:12 +0000 (0:00:04.214) 0:01:35.569 ******* 2026-02-16 05:16:16.243051 | orchestrator | changed: [testbed-node-0] => { 2026-02-16 05:16:16.243070 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:16:16.243088 | orchestrator | } 2026-02-16 
05:16:16.243107 | orchestrator | changed: [testbed-node-1] => { 2026-02-16 05:16:16.243124 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:16:16.243150 | orchestrator | } 2026-02-16 05:16:16.243169 | orchestrator | changed: [testbed-node-2] => { 2026-02-16 05:16:16.243187 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:16:16.243204 | orchestrator | } 2026-02-16 05:16:16.243223 | orchestrator | 2026-02-16 05:16:16.243243 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-16 05:16:16.243261 | orchestrator | Monday 16 February 2026 05:16:14 +0000 (0:00:01.477) 0:01:37.047 ******* 2026-02-16 05:16:16.243311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-16 05:16:16.243350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 05:16:16.243365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:16:16.243379 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:16:16.243393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-16 05:16:16.243406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 05:16:16.243419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:16:16.243432 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:16:16.243451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-16 05:16:16.243471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 05:16:16.243493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:16:21.344879 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:16:21.344999 | orchestrator | 2026-02-16 05:16:21.345013 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-16 05:16:21.345025 | orchestrator | Monday 16 February 2026 05:16:16 +0000 (0:00:01.997) 0:01:39.044 ******* 2026-02-16 05:16:21.345034 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:16:21.345043 | orchestrator | 2026-02-16 05:16:21.345064 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-16 05:16:21.345809 | orchestrator | Monday 16 February 2026 05:16:18 +0000 (0:00:01.804) 0:01:40.849 ******* 2026-02-16 05:16:21.345839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:16:21.345854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-16 05:16:21.345928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-16 05:16:21.345940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-16 05:16:21.345970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:16:21.345980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-16 05:16:21.345989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-16 05:16:21.345999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-16 05:16:21.346068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:16:21.346080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-16 05:16:21.346097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-16 05:16:23.080837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-16 05:16:23.080939 | orchestrator | 2026-02-16 05:16:23.080957 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-16 05:16:23.080971 | orchestrator | Monday 16 February 2026 05:16:22 +0000 (0:00:04.402) 0:01:45.252 ******* 2026-02-16 05:16:23.080985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:16:23.081037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-16 05:16:23.081050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-16 05:16:23.081063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-16 05:16:23.081074 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:16:23.081105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:16:23.081119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-16 05:16:23.081130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-16 05:16:23.081154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-16 05:16:23.081166 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:16:23.081177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:16:23.081189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-16 05:16:23.081208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-16 05:16:37.908977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-16 05:16:37.909128 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:16:37.909156 | orchestrator | 2026-02-16 05:16:37.909179 | orchestrator | TASK [haproxy-config : 
Configuring firewall for aodh] ************************** 2026-02-16 05:16:37.909192 | orchestrator | Monday 16 February 2026 05:16:24 +0000 (0:00:01.718) 0:01:46.971 ******* 2026-02-16 05:16:37.909204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:16:37.909220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:16:37.909233 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:16:37.909244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:16:37.909271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:16:37.909283 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:16:37.909294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:16:37.909305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:16:37.909316 | orchestrator | 
skipping: [testbed-node-2] 2026-02-16 05:16:37.909326 | orchestrator | 2026-02-16 05:16:37.909338 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-16 05:16:37.909349 | orchestrator | Monday 16 February 2026 05:16:26 +0000 (0:00:02.146) 0:01:49.117 ******* 2026-02-16 05:16:37.909360 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:16:37.909371 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:16:37.909382 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:16:37.909393 | orchestrator | 2026-02-16 05:16:37.909403 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-16 05:16:37.909414 | orchestrator | Monday 16 February 2026 05:16:28 +0000 (0:00:02.267) 0:01:51.384 ******* 2026-02-16 05:16:37.909425 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:16:37.909436 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:16:37.909446 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:16:37.909457 | orchestrator | 2026-02-16 05:16:37.909468 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-16 05:16:37.909478 | orchestrator | Monday 16 February 2026 05:16:31 +0000 (0:00:02.877) 0:01:54.262 ******* 2026-02-16 05:16:37.909489 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:16:37.909500 | orchestrator | 2026-02-16 05:16:37.909510 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-16 05:16:37.909521 | orchestrator | Monday 16 February 2026 05:16:33 +0000 (0:00:01.611) 0:01:55.874 ******* 2026-02-16 05:16:37.909601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:16:37.909635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-16 05:16:37.909657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-16 05:16:37.909687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:16:37.909701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}})  2026-02-16 05:16:37.909721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-16 05:16:37.909769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:16:39.769509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-16 05:16:39.769693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-16 05:16:39.769719 | orchestrator |
2026-02-16 05:16:39.769735 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-02-16 05:16:39.769747 | orchestrator | Monday 16 February 2026 05:16:37 +0000 (0:00:04.832) 0:02:00.706 *******
2026-02-16 05:16:39.769763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:16:39.769805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-16 05:16:39.769825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-16 05:16:39.769845 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:16:39.769900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:16:39.769924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-16 05:16:39.769945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-16 05:16:39.769976 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:16:39.769989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:16:39.770002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-16 05:16:39.770082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-16 05:16:56.280992 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:16:56.281127 | orchestrator |
2026-02-16 05:16:56.281148 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-02-16 05:16:56.281161 | orchestrator | Monday 16 February 2026 05:16:39 +0000 (0:00:01.868) 0:02:02.575 *******
2026-02-16 05:16:56.281194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-16 05:16:56.281219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-16 05:16:56.281239 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:16:56.281258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-16 05:16:56.281277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-16 05:16:56.281325 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:16:56.281345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-16 05:16:56.281364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-16 05:16:56.281383 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:16:56.281402 | orchestrator |
2026-02-16 05:16:56.281458 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-02-16 05:16:56.281479 | orchestrator | Monday 16 February 2026 05:16:41 +0000 (0:00:02.288) 0:02:04.436 *******
2026-02-16 05:16:56.281499 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:16:56.281519 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:16:56.281538 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:16:56.281620 | orchestrator |
2026-02-16 05:16:56.281635 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-02-16 05:16:56.281648 | orchestrator | Monday 16 February 2026 05:16:43 +0000 (0:00:02.288) 0:02:06.724 *******
2026-02-16 05:16:56.281659 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:16:56.281670 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:16:56.281680 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:16:56.281691 | orchestrator |
2026-02-16 05:16:56.281701 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-02-16 05:16:56.281712 | orchestrator | Monday 16 February 2026 05:16:46 +0000 (0:00:02.838) 0:02:09.563 *******
2026-02-16 05:16:56.281723 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:16:56.281733 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:16:56.281753 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:16:56.281771 | orchestrator |
2026-02-16 05:16:56.281789 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-02-16 05:16:56.281809 | orchestrator | Monday 16 February 2026 05:16:48 +0000 (0:00:01.346) 0:02:10.910 *******
2026-02-16 05:16:56.281828 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 05:16:56.281847 | orchestrator |
2026-02-16 05:16:56.281866 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-02-16 05:16:56.281885 | orchestrator | Monday 16 February 2026 05:16:49 +0000 (0:00:01.831) 0:02:12.742 *******
2026-02-16 05:16:56.281900 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-02-16 05:16:56.281950 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-02-16 05:16:56.281975 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-02-16 05:16:56.281987 | orchestrator |
2026-02-16 05:16:56.281998 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-02-16 05:16:56.282010 | orchestrator | Monday 16 February 2026 05:16:53 +0000 (0:00:03.690) 0:02:16.432 *******
2026-02-16 05:16:56.282096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-02-16 05:16:56.282110 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:16:56.282132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-02-16 05:16:56.282150 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:16:56.282184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-02-16 05:17:08.062904 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:17:08.062994 | orchestrator |
2026-02-16 05:17:08.063017 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-02-16 05:17:08.063026 | orchestrator | Monday 16 February 2026 05:16:56 +0000 (0:00:02.648) 0:02:19.081 *******
2026-02-16 05:17:08.063035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-02-16 05:17:08.063044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-02-16 05:17:08.063052 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:17:08.063059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-02-16 05:17:08.063065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-02-16 05:17:08.063072 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:17:08.063078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-02-16 05:17:08.063085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-02-16 05:17:08.063091 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:17:08.063097 | orchestrator |
2026-02-16 05:17:08.063104 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-02-16 05:17:08.063110 | orchestrator | Monday 16 February 2026 05:16:59 +0000 (0:00:02.767) 0:02:21.848 *******
2026-02-16 05:17:08.063116 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:17:08.063122 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:17:08.063128 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:17:08.063134 | orchestrator |
2026-02-16 05:17:08.063140 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-02-16 05:17:08.063146 | orchestrator | Monday 16 February 2026 05:17:00 +0000 (0:00:01.454) 0:02:23.303 *******
2026-02-16 05:17:08.063152 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:17:08.063159 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:17:08.063165 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:17:08.063189 | orchestrator |
2026-02-16 05:17:08.063196 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-02-16 05:17:08.063202 | orchestrator | Monday 16 February 2026 05:17:02 +0000 (0:00:02.308) 0:02:25.611 *******
2026-02-16 05:17:08.063208 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 05:17:08.063215 | orchestrator |
2026-02-16 05:17:08.063221 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-02-16 05:17:08.063227 | orchestrator | Monday 16 February 2026 05:17:04 +0000 (0:00:01.661) 0:02:27.273 *******
2026-02-16 05:17:08.063253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:17:08.063263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:17:08.063271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 05:17:08.063279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 05:17:08.063292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-16 05:17:08.063308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-16 05:17:10.138407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-16 05:17:10.138488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-16 05:17:10.138498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:17:10.138524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 05:17:10.138542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-16 05:17:10.138621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-16 05:17:10.138630 | orchestrator |
2026-02-16 05:17:10.138638 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-02-16 05:17:10.138644 | orchestrator | Monday 16 February 2026 05:17:09 +0000 (0:00:04.767) 0:02:32.041 *******
2026-02-16 05:17:10.138651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:17:10.138657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 05:17:10.138670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-16 05:17:10.138679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-16 05:17:10.138684 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:17:10.138697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:17:21.356801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-16 05:17:21.356920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro',
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-16 05:17:21.356968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-16 05:17:21.356982 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:17:21.357013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:17:21.357028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 05:17:21.357059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-16 05:17:21.357078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-16 05:17:21.357110 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:17:21.357128 | orchestrator | 2026-02-16 05:17:21.357148 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-16 05:17:21.357168 | orchestrator | Monday 16 February 2026 05:17:11 +0000 (0:00:02.048) 0:02:34.089 ******* 2026-02-16 05:17:21.357186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:17:21.357205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:17:21.357225 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:17:21.357243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:17:21.357261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-16 
05:17:21.357280 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:17:21.357300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:17:21.357330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:17:21.357351 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:17:21.357371 | orchestrator | 2026-02-16 05:17:21.357391 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-16 05:17:21.357410 | orchestrator | Monday 16 February 2026 05:17:13 +0000 (0:00:02.076) 0:02:36.166 ******* 2026-02-16 05:17:21.357429 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:17:21.357449 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:17:21.357468 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:17:21.357487 | orchestrator | 2026-02-16 05:17:21.357508 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-16 05:17:21.357528 | orchestrator | Monday 16 February 2026 05:17:15 +0000 (0:00:02.248) 0:02:38.414 ******* 2026-02-16 05:17:21.357548 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:17:21.357627 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:17:21.357641 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:17:21.357653 | orchestrator | 2026-02-16 05:17:21.357664 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-16 05:17:21.357674 | orchestrator | Monday 16 February 2026 05:17:18 +0000 (0:00:02.805) 0:02:41.220 ******* 2026-02-16 05:17:21.357685 | 
orchestrator | skipping: [testbed-node-0] 2026-02-16 05:17:21.357696 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:17:21.357707 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:17:21.357717 | orchestrator | 2026-02-16 05:17:21.357728 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-16 05:17:21.357739 | orchestrator | Monday 16 February 2026 05:17:19 +0000 (0:00:01.553) 0:02:42.774 ******* 2026-02-16 05:17:21.357749 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:17:21.357760 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:17:21.357783 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:17:26.559851 | orchestrator | 2026-02-16 05:17:26.560016 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-16 05:17:26.560049 | orchestrator | Monday 16 February 2026 05:17:21 +0000 (0:00:01.387) 0:02:44.161 ******* 2026-02-16 05:17:26.560069 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:17:26.560085 | orchestrator | 2026-02-16 05:17:26.560097 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-16 05:17:26.560108 | orchestrator | Monday 16 February 2026 05:17:23 +0000 (0:00:01.731) 0:02:45.893 ******* 2026-02-16 05:17:26.560127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:17:26.560174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-16 05:17:26.560198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-16 05:17:26.560237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-16 05:17:26.560250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-16 05:17:26.560309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-16 05:17:26.560323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:17:26.560338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-16 05:17:26.560351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-16 05:17:26.560369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-16 05:17:26.560383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-16 05:17:26.560411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-16 05:17:28.520457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-16 05:17:28.520622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-16 05:17:28.520655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:17:28.520691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-16 05:17:28.520726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-16 05:17:28.520761 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-16 05:17:28.520791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-16 05:17:28.520813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-16 05:17:28.520833 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-16 05:17:28.520854 | orchestrator | 2026-02-16 05:17:28.520877 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-16 05:17:28.520896 | orchestrator | Monday 16 February 2026 05:17:27 +0000 (0:00:04.825) 0:02:50.718 ******* 2026-02-16 05:17:28.520923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:17:28.520957 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-16 05:17:28.520989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-16 05:17:29.729797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-16 05:17:29.729914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-16 05:17:29.729932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-16 05:17:29.729944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': 
'30'}}})  2026-02-16 05:17:29.729990 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:17:29.730006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:17:29.730090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-16 05:17:29.731155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-16 05:17:29.731241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-16 05:17:29.731254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-16 05:17:29.731285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-16 05:17:29.731297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-16 05:17:29.731308 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:17:29.731336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 
'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:17:45.715152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-16 05:17:45.715255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-16 05:17:45.715299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-16 05:17:45.715322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-16 05:17:45.715330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-16 05:17:45.715338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-16 05:17:45.715345 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:17:45.715355 | orchestrator | 2026-02-16 05:17:45.715363 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-16 05:17:45.715371 | orchestrator | Monday 16 February 2026 05:17:29 +0000 (0:00:01.814) 0:02:52.532 ******* 2026-02-16 05:17:45.715392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:17:45.715403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:17:45.715412 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:17:45.715419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:17:45.715426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:17:45.715438 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:17:45.715445 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:17:45.715452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:17:45.715459 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:17:45.715465 | orchestrator | 2026-02-16 05:17:45.715472 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-16 05:17:45.715479 | orchestrator | Monday 16 February 2026 05:17:31 +0000 (0:00:02.042) 0:02:54.575 ******* 2026-02-16 05:17:45.715486 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:17:45.715493 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:17:45.715500 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:17:45.715507 | orchestrator | 2026-02-16 05:17:45.715518 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-16 05:17:45.715529 | orchestrator | Monday 16 February 2026 05:17:34 +0000 (0:00:03.230) 0:02:57.806 ******* 2026-02-16 05:17:45.715540 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:17:45.715549 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:17:45.715559 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:17:45.715590 | orchestrator | 2026-02-16 05:17:45.715602 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-16 05:17:45.715618 | orchestrator | Monday 16 February 2026 05:17:37 +0000 (0:00:02.870) 0:03:00.676 ******* 2026-02-16 05:17:45.715631 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:17:45.715639 | orchestrator | skipping: [testbed-node-1] 2026-02-16 
05:17:45.715646 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:17:45.715653 | orchestrator | 2026-02-16 05:17:45.715659 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-16 05:17:45.715666 | orchestrator | Monday 16 February 2026 05:17:39 +0000 (0:00:01.417) 0:03:02.093 ******* 2026-02-16 05:17:45.715674 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:17:45.715682 | orchestrator | 2026-02-16 05:17:45.715690 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-16 05:17:45.715697 | orchestrator | Monday 16 February 2026 05:17:41 +0000 (0:00:01.994) 0:03:04.088 ******* 2026-02-16 05:17:45.715715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-16 05:17:46.849463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-16 05:17:46.849535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-16 05:17:46.849553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-16 05:17:46.849614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-16 05:17:46.849625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-16 05:17:49.708751 | orchestrator | 2026-02-16 05:17:49.708853 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-16 05:17:49.708870 | orchestrator | Monday 16 February 2026 05:17:46 +0000 (0:00:05.572) 0:03:09.660 ******* 2026-02-16 05:17:49.708904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-16 05:17:49.708922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-16 05:17:49.708961 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:17:49.709002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-16 05:17:49.709017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-16 05:17:49.709037 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:17:49.709063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}}}})  2026-02-16 05:18:06.947528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}}}})  2026-02-16 05:18:06.947792 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:18:06.947815 | orchestrator | 2026-02-16 05:18:06.947828 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-16 05:18:06.947841 | orchestrator | Monday 16 February 2026 05:17:50 +0000 (0:00:03.946) 0:03:13.606 ******* 2026-02-16 05:18:06.947854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-16 05:18:06.947868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-16 05:18:06.947896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-16 05:18:06.947909 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:18:06.947960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-16 05:18:06.947974 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:18:06.947985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-16 05:18:06.948009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-16 05:18:06.948022 | orchestrator | 
skipping: [testbed-node-0] 2026-02-16 05:18:06.948035 | orchestrator | 2026-02-16 05:18:06.948048 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-16 05:18:06.948061 | orchestrator | Monday 16 February 2026 05:17:54 +0000 (0:00:03.724) 0:03:17.331 ******* 2026-02-16 05:18:06.948074 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:18:06.948087 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:18:06.948098 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:18:06.948110 | orchestrator | 2026-02-16 05:18:06.948123 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-16 05:18:06.948136 | orchestrator | Monday 16 February 2026 05:17:56 +0000 (0:00:02.174) 0:03:19.506 ******* 2026-02-16 05:18:06.948148 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:18:06.948161 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:18:06.948173 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:18:06.948185 | orchestrator | 2026-02-16 05:18:06.948197 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-16 05:18:06.948210 | orchestrator | Monday 16 February 2026 05:17:59 +0000 (0:00:02.717) 0:03:22.223 ******* 2026-02-16 05:18:06.948222 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:18:06.948235 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:18:06.948248 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:18:06.948261 | orchestrator | 2026-02-16 05:18:06.948274 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-16 05:18:06.948286 | orchestrator | Monday 16 February 2026 05:18:00 +0000 (0:00:01.480) 0:03:23.704 ******* 2026-02-16 05:18:06.948299 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:18:06.948311 | orchestrator | 2026-02-16 05:18:06.948325 | orchestrator | TASK [haproxy-config : 
Copying over grafana haproxy config] ******************** 2026-02-16 05:18:06.948345 | orchestrator | Monday 16 February 2026 05:18:02 +0000 (0:00:01.678) 0:03:25.382 ******* 2026-02-16 05:18:06.948376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:18:06.948411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:18:23.713650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:18:23.713772 | orchestrator | 2026-02-16 05:18:23.713791 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-16 05:18:23.713804 | orchestrator | Monday 16 February 2026 05:18:06 +0000 (0:00:04.370) 0:03:29.752 ******* 2026-02-16 05:18:23.713817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:18:23.713829 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:18:23.713842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:18:23.713853 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:18:23.713880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:18:23.713892 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:18:23.713903 | orchestrator | 2026-02-16 05:18:23.713938 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-16 05:18:23.713950 | orchestrator | Monday 16 February 2026 05:18:08 +0000 (0:00:01.677) 0:03:31.430 ******* 2026-02-16 05:18:23.713963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:18:23.713977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:18:23.713989 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:18:23.714076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:18:23.714091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:18:23.714102 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:18:23.714113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:18:23.714125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:18:23.714147 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:18:23.714160 | orchestrator | 2026-02-16 05:18:23.714174 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-16 05:18:23.714186 | orchestrator | Monday 16 February 2026 05:18:10 +0000 
(0:00:01.506) 0:03:32.937 ******* 2026-02-16 05:18:23.714198 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:18:23.714212 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:18:23.714224 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:18:23.714236 | orchestrator | 2026-02-16 05:18:23.714249 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-16 05:18:23.714262 | orchestrator | Monday 16 February 2026 05:18:12 +0000 (0:00:02.187) 0:03:35.124 ******* 2026-02-16 05:18:23.714273 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:18:23.714288 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:18:23.714307 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:18:23.714325 | orchestrator | 2026-02-16 05:18:23.714342 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-16 05:18:23.714360 | orchestrator | Monday 16 February 2026 05:18:15 +0000 (0:00:02.784) 0:03:37.909 ******* 2026-02-16 05:18:23.714378 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:18:23.714399 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:18:23.714418 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:18:23.714435 | orchestrator | 2026-02-16 05:18:23.714448 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-16 05:18:23.714460 | orchestrator | Monday 16 February 2026 05:18:16 +0000 (0:00:01.332) 0:03:39.242 ******* 2026-02-16 05:18:23.714473 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:18:23.714485 | orchestrator | 2026-02-16 05:18:23.714496 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-16 05:18:23.714506 | orchestrator | Monday 16 February 2026 05:18:18 +0000 (0:00:01.641) 0:03:40.883 ******* 2026-02-16 05:18:23.714540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-16 05:18:25.460150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-16 05:18:25.460306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back 
if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-16 05:18:25.460328 | orchestrator | 2026-02-16 05:18:25.460343 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-16 05:18:25.460355 | orchestrator | Monday 16 February 2026 05:18:23 +0000 (0:00:05.635) 0:03:46.518 ******* 2026-02-16 05:18:25.460367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-16 05:18:25.460387 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:18:25.460406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-16 05:18:34.411915 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:18:34.412107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-16 05:18:34.412143 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:18:34.412150 | orchestrator | 2026-02-16 05:18:34.412158 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-16 05:18:34.412166 | orchestrator | Monday 16 February 2026 05:18:25 +0000 (0:00:01.748) 0:03:48.267 ******* 2026-02-16 05:18:34.412174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-16 05:18:34.412184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-16 05:18:34.412193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-16 05:18:34.412202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-16 05:18:34.412209 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-16 05:18:34.412217 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:18:34.412237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-16 05:18:34.412244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-16 05:18:34.412256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-16 05:18:34.412263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-16 05:18:34.412270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-16 05:18:34.412276 | orchestrator | skipping: 
[testbed-node-1] 2026-02-16 05:18:34.412283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-16 05:18:34.412293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-16 05:18:34.412299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-16 05:18:34.412306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-16 05:18:34.412312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-16 05:18:34.412318 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:18:34.412324 | orchestrator | 2026-02-16 05:18:34.412331 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-16 05:18:34.412338 | 
orchestrator | Monday 16 February 2026 05:18:27 +0000 (0:00:02.019) 0:03:50.286 ******* 2026-02-16 05:18:34.412345 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:18:34.412352 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:18:34.412359 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:18:34.412365 | orchestrator | 2026-02-16 05:18:34.412372 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-16 05:18:34.412378 | orchestrator | Monday 16 February 2026 05:18:29 +0000 (0:00:02.332) 0:03:52.619 ******* 2026-02-16 05:18:34.412385 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:18:34.412391 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:18:34.412398 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:18:34.412405 | orchestrator | 2026-02-16 05:18:34.412412 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-16 05:18:34.412425 | orchestrator | Monday 16 February 2026 05:18:32 +0000 (0:00:03.021) 0:03:55.640 ******* 2026-02-16 05:18:34.412432 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:18:34.412438 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:18:34.412444 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:18:34.412450 | orchestrator | 2026-02-16 05:18:34.412456 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-16 05:18:34.412462 | orchestrator | Monday 16 February 2026 05:18:34 +0000 (0:00:01.364) 0:03:57.004 ******* 2026-02-16 05:18:34.412473 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:18:44.705376 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:18:44.705485 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:18:44.705499 | orchestrator | 2026-02-16 05:18:44.705511 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-16 05:18:44.705523 | orchestrator | Monday 16 February 2026 
05:18:35 +0000 (0:00:01.379) 0:03:58.384 ******* 2026-02-16 05:18:44.705533 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:18:44.705542 | orchestrator | 2026-02-16 05:18:44.705553 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-16 05:18:44.705562 | orchestrator | Monday 16 February 2026 05:18:37 +0000 (0:00:01.994) 0:04:00.378 ******* 2026-02-16 05:18:44.705628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-16 05:18:44.705662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 05:18:44.705675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 05:18:44.705686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-16 05:18:44.705736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 05:18:44.705749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 05:18:44.705779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-16 05:18:44.705791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 05:18:44.705801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 05:18:44.705820 | orchestrator | 2026-02-16 05:18:44.705837 | orchestrator | TASK [haproxy-config : Add configuration for 
keystone when using single external frontend] *** 2026-02-16 05:18:44.705855 | orchestrator | Monday 16 February 2026 05:18:42 +0000 (0:00:04.936) 0:04:05.314 ******* 2026-02-16 05:18:44.705881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-16 05:18:46.552061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 05:18:46.552185 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 05:18:46.552203 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:18:46.552220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-16 05:18:46.552265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 05:18:46.552285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 05:18:46.552305 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:18:46.552351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-16 05:18:46.552384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-16 05:18:46.552404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-16 05:18:46.552435 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:18:46.552455 | orchestrator | 2026-02-16 05:18:46.552477 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-16 05:18:46.552498 | orchestrator 
| Monday 16 February 2026 05:18:44 +0000 (0:00:02.195) 0:04:07.510 ******* 2026-02-16 05:18:46.552518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-16 05:18:46.552541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-16 05:18:46.552561 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:18:46.552613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-16 05:18:46.552632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-16 05:18:46.552650 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:18:46.552667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-16 05:18:46.552684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-16 05:18:46.552702 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:18:46.552720 | orchestrator | 2026-02-16 05:18:46.552740 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-16 05:18:46.552775 | orchestrator | Monday 16 February 2026 05:18:46 +0000 (0:00:01.846) 0:04:09.357 ******* 2026-02-16 05:19:01.966377 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:19:01.966496 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:19:01.966512 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:19:01.966524 | orchestrator | 2026-02-16 05:19:01.966538 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-16 05:19:01.966551 | orchestrator | Monday 16 February 2026 05:18:48 +0000 (0:00:02.400) 0:04:11.757 ******* 2026-02-16 05:19:01.966562 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:19:01.966621 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:19:01.966636 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:19:01.966647 | orchestrator | 2026-02-16 05:19:01.966658 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-16 05:19:01.966670 | orchestrator | Monday 16 February 2026 05:18:52 +0000 (0:00:03.209) 0:04:14.967 ******* 2026-02-16 05:19:01.966681 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:19:01.966693 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:19:01.966704 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:19:01.966714 | orchestrator | 2026-02-16 05:19:01.966725 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-16 05:19:01.966736 | orchestrator | Monday 16 February 2026 05:18:53 +0000 (0:00:01.391) 0:04:16.358 ******* 2026-02-16 05:19:01.966747 | 
orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:19:01.966783 | orchestrator | 2026-02-16 05:19:01.966794 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-16 05:19:01.966805 | orchestrator | Monday 16 February 2026 05:18:55 +0000 (0:00:01.765) 0:04:18.123 ******* 2026-02-16 05:19:01.966837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:19:01.966856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 05:19:01.966870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:19:01.966901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 05:19:01.966929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:19:01.966954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  
2026-02-16 05:19:01.966968 | orchestrator | 2026-02-16 05:19:01.966980 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-16 05:19:01.966994 | orchestrator | Monday 16 February 2026 05:19:00 +0000 (0:00:04.955) 0:04:23.078 ******* 2026-02-16 05:19:01.967007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:19:01.967030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 05:19:14.959242 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:19:14.959354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:19:14.959390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 05:19:14.959401 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:19:14.959410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:19:14.959419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-16 05:19:14.959427 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:19:14.959435 | orchestrator | 2026-02-16 05:19:14.959444 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-16 05:19:14.959454 | orchestrator | Monday 16 February 2026 05:19:01 +0000 (0:00:01.693) 0:04:24.772 ******* 2026-02-16 05:19:14.959476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:19:14.959495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:19:14.959505 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:19:14.959513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:19:14.959521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:19:14.959529 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:19:14.959541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:19:14.959550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:19:14.959558 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:19:14.959566 | orchestrator | 2026-02-16 05:19:14.959622 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-16 05:19:14.959631 | orchestrator | Monday 16 February 2026 05:19:03 +0000 (0:00:01.890) 0:04:26.663 ******* 2026-02-16 05:19:14.959639 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:19:14.959647 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:19:14.959655 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:19:14.959663 | orchestrator | 2026-02-16 05:19:14.959671 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-16 05:19:14.959679 | orchestrator | Monday 16 February 2026 05:19:06 +0000 (0:00:02.329) 0:04:28.993 ******* 2026-02-16 05:19:14.959687 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:19:14.959694 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:19:14.959702 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:19:14.959710 | orchestrator | 2026-02-16 05:19:14.959718 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-16 05:19:14.959725 | orchestrator | Monday 16 February 2026 05:19:09 +0000 (0:00:02.944) 0:04:31.937 ******* 2026-02-16 05:19:14.959734 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:19:14.959741 | orchestrator | 2026-02-16 05:19:14.959749 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-16 05:19:14.959757 | orchestrator | Monday 16 February 2026 05:19:11 +0000 (0:00:02.205) 0:04:34.143 ******* 2026-02-16 05:19:14.959766 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:19:14.959782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 05:19:14.959800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 05:19:16.699539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 05:19:16.699695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 
05:19:16.699714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:19:16.699748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 05:19:16.699762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 05:19:16.699793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 05:19:16.699814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 05:19:16.699826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 05:19:16.699837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 05:19:16.699865 | orchestrator | 2026-02-16 05:19:16.699888 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-16 05:19:16.699911 | orchestrator | Monday 16 February 2026 05:19:16 +0000 (0:00:04.755) 0:04:38.899 ******* 2026-02-16 05:19:16.699934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:19:16.699960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 05:19:19.709496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 05:19:19.709653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 
'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 05:19:19.709673 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:19:19.709688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:19:19.709729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 05:19:19.709742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 05:19:19.709771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 05:19:19.709783 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:19:19.709815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:19:19.709828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 05:19:19.709840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-16 05:19:19.709860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-16 05:19:19.709871 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:19:19.709882 | orchestrator | 2026-02-16 05:19:19.709895 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-16 05:19:19.709907 | orchestrator | Monday 16 February 2026 05:19:17 +0000 (0:00:01.726) 0:04:40.625 ******* 2026-02-16 05:19:19.709920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:19:19.709934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:19:19.709946 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:19:19.709957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option 
httpchk']}})  2026-02-16 05:19:19.709976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:19:35.355747 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:19:35.355893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:19:35.355917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:19:35.355932 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:19:35.355945 | orchestrator | 2026-02-16 05:19:35.355958 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-16 05:19:35.355970 | orchestrator | Monday 16 February 2026 05:19:19 +0000 (0:00:01.888) 0:04:42.514 ******* 2026-02-16 05:19:35.355981 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:19:35.355992 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:19:35.356003 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:19:35.356014 | orchestrator | 2026-02-16 05:19:35.356025 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-16 05:19:35.356036 | orchestrator | Monday 16 February 2026 05:19:21 +0000 (0:00:02.284) 0:04:44.798 ******* 2026-02-16 05:19:35.356070 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:19:35.356082 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:19:35.356093 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:19:35.356103 | 
orchestrator | 2026-02-16 05:19:35.356114 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-16 05:19:35.356125 | orchestrator | Monday 16 February 2026 05:19:24 +0000 (0:00:02.872) 0:04:47.671 ******* 2026-02-16 05:19:35.356135 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:19:35.356146 | orchestrator | 2026-02-16 05:19:35.356157 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-16 05:19:35.356167 | orchestrator | Monday 16 February 2026 05:19:27 +0000 (0:00:02.615) 0:04:50.287 ******* 2026-02-16 05:19:35.356180 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 05:19:35.356193 | orchestrator | 2026-02-16 05:19:35.356205 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-16 05:19:35.356217 | orchestrator | Monday 16 February 2026 05:19:31 +0000 (0:00:04.333) 0:04:54.620 ******* 2026-02-16 05:19:35.356238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 05:19:35.356286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-16 05:19:35.356315 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:19:35.356340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 05:19:35.356374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-16 05:19:35.356394 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:19:35.356478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 05:19:38.812355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-16 05:19:38.812538 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:19:38.812569 | orchestrator | 2026-02-16 05:19:38.813283 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-16 05:19:38.813308 | orchestrator | Monday 16 February 2026 05:19:35 +0000 (0:00:03.531) 0:04:58.152 ******* 2026-02-16 05:19:38.813326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 05:19:38.813342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-16 05:19:38.813353 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:19:38.813407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 05:19:38.813446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-16 05:19:38.813459 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:19:38.813470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 05:19:38.813496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-16 05:19:53.753070 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:19:53.753210 | orchestrator | 2026-02-16 05:19:53.753236 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-16 05:19:53.753249 | orchestrator | Monday 16 February 2026 05:19:38 +0000 (0:00:03.470) 0:05:01.623 ******* 2026-02-16 05:19:53.753261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}})  2026-02-16 05:19:53.753276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-16 05:19:53.753287 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:19:53.753297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-16 05:19:53.753308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  
2026-02-16 05:19:53.753318 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:19:53.753328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-16 05:19:53.753380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-16 05:19:53.753394 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:19:53.753411 | orchestrator | 2026-02-16 05:19:53.753428 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-16 05:19:53.753445 | orchestrator | Monday 16 February 2026 05:19:42 +0000 (0:00:03.421) 0:05:05.044 ******* 2026-02-16 05:19:53.753463 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:19:53.753500 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:19:53.753511 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:19:53.753521 | orchestrator | 2026-02-16 05:19:53.753532 | orchestrator | TASK [proxysql-config : Copying over 
mariadb ProxySQL rules config] ************ 2026-02-16 05:19:53.753547 | orchestrator | Monday 16 February 2026 05:19:44 +0000 (0:00:02.763) 0:05:07.807 ******* 2026-02-16 05:19:53.753563 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:19:53.753610 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:19:53.753628 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:19:53.753644 | orchestrator | 2026-02-16 05:19:53.753662 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-16 05:19:53.753679 | orchestrator | Monday 16 February 2026 05:19:47 +0000 (0:00:02.482) 0:05:10.290 ******* 2026-02-16 05:19:53.753694 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:19:53.753707 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:19:53.753723 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:19:53.753748 | orchestrator | 2026-02-16 05:19:53.753769 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-16 05:19:53.753787 | orchestrator | Monday 16 February 2026 05:19:48 +0000 (0:00:01.446) 0:05:11.737 ******* 2026-02-16 05:19:53.753803 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:19:53.753818 | orchestrator | 2026-02-16 05:19:53.753834 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-16 05:19:53.753850 | orchestrator | Monday 16 February 2026 05:19:51 +0000 (0:00:02.209) 0:05:13.947 ******* 2026-02-16 05:19:53.753868 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-16 05:19:53.753885 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-16 05:19:53.753917 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-16 05:19:53.753933 | orchestrator 
| 2026-02-16 05:19:53.753948 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-16 05:19:53.753963 | orchestrator | Monday 16 February 2026 05:19:53 +0000 (0:00:02.502) 0:05:16.450 ******* 2026-02-16 05:19:53.754001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-16 05:20:08.489127 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:20:08.489255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-16 05:20:08.489287 | orchestrator | skipping: 
[testbed-node-1] 2026-02-16 05:20:08.489300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-16 05:20:08.489310 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:20:08.489321 | orchestrator | 2026-02-16 05:20:08.489332 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-16 05:20:08.489343 | orchestrator | Monday 16 February 2026 05:19:55 +0000 (0:00:01.725) 0:05:18.175 ******* 2026-02-16 05:20:08.489355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-16 05:20:08.489390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-16 05:20:08.489400 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:20:08.489410 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:20:08.489420 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-16 05:20:08.489430 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:20:08.489440 | orchestrator | 2026-02-16 05:20:08.489450 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-16 05:20:08.489460 | orchestrator | Monday 16 February 2026 05:19:56 +0000 (0:00:01.467) 0:05:19.643 ******* 2026-02-16 05:20:08.489469 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:20:08.489479 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:20:08.489488 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:20:08.489498 | orchestrator | 2026-02-16 05:20:08.489507 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-16 05:20:08.489517 | orchestrator | Monday 16 February 2026 05:19:58 +0000 (0:00:01.443) 0:05:21.086 ******* 2026-02-16 05:20:08.489527 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:20:08.489536 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:20:08.489546 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:20:08.489555 | orchestrator | 2026-02-16 05:20:08.489565 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-16 05:20:08.489601 | orchestrator | Monday 16 February 2026 05:20:00 +0000 (0:00:02.164) 0:05:23.250 ******* 2026-02-16 05:20:08.489619 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:20:08.489636 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:20:08.489669 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:20:08.489685 | orchestrator | 2026-02-16 05:20:08.489697 | orchestrator | TASK [include_role : neutron] 
************************************************** 2026-02-16 05:20:08.489709 | orchestrator | Monday 16 February 2026 05:20:02 +0000 (0:00:01.714) 0:05:24.965 ******* 2026-02-16 05:20:08.489721 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:20:08.489733 | orchestrator | 2026-02-16 05:20:08.489744 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-16 05:20:08.489755 | orchestrator | Monday 16 February 2026 05:20:04 +0000 (0:00:02.034) 0:05:27.000 ******* 2026-02-16 05:20:08.489790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:20:08.489815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:08.489829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-16 05:20:08.489848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 
'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-16 05:20:08.489870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:08.658369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:08.658468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:08.658480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 05:20:08.658488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 05:20:08.658497 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:08.658518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-16 05:20:08.658539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:08.658551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:08.658560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-16 05:20:08.658621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-16 05:20:08.658633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:20:08.658647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:08.891005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-16 05:20:08.891102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-16 05:20:08.891119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:08.891146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:08.891154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:08.891193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 05:20:08.891205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 05:20:08.891216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:08.891226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-16 05:20:08.891235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:08.891250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:08.891273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-16 05:20:09.137230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-16 05:20:09.137381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:20:09.137434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:09.137480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-16 05:20:09.137568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-16 05:20:09.137656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:09.137677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:09.137698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:09.137727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 05:20:09.137761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 05:20:09.137795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:11.361737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-16 05:20:11.361839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:11.361855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:11.361887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-16 05:20:11.361926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-16 
05:20:11.361939 | orchestrator | 2026-02-16 05:20:11.361952 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-16 05:20:11.361965 | orchestrator | Monday 16 February 2026 05:20:10 +0000 (0:00:06.050) 0:05:33.050 ******* 2026-02-16 05:20:11.361996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:20:11.362011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:11.362092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-16 05:20:11.362115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-16 05:20:11.362137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:11.479402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:11.479506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:11.479523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 05:20:11.479558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:20:11.479671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 05:20:11.479713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:11.479727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-16 05:20:11.479741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:11.479767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-16 05:20:11.479780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-16 05:20:11.479792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:11.479811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:11.560330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:11.560463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-16 05:20:11.560513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:11.560528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-16 05:20:11.560541 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:20:11.560554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:11.560620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:20:11.560636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 05:20:11.560702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 05:20:11.560716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:11.560729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:11.560750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-16 05:20:12.818434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-16 05:20:12.818675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': 
{'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-16 05:20:12.819400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:12.819424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:12.819458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:12.819467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:12.819493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-16 05:20:12.819520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:12.819529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-16 05:20:12.819537 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:20:12.819547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-16 05:20:12.819555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-16 05:20:12.819596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:27.925694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-16 05:20:27.925804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-16 05:20:27.925817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-16 05:20:27.925829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-16 05:20:27.925838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-16 05:20:27.925845 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:20:27.925853 | orchestrator | 2026-02-16 05:20:27.925862 | orchestrator | TASK [haproxy-config : 
Configuring firewall for neutron] *********************** 2026-02-16 05:20:27.925869 | orchestrator | Monday 16 February 2026 05:20:12 +0000 (0:00:02.574) 0:05:35.625 ******* 2026-02-16 05:20:27.925901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:20:27.925923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:20:27.925931 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:20:27.925937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:20:27.925944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:20:27.925951 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:20:27.925957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:20:27.925968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-16 
05:20:27.925975 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:20:27.925981 | orchestrator | 2026-02-16 05:20:27.925987 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-16 05:20:27.925994 | orchestrator | Monday 16 February 2026 05:20:15 +0000 (0:00:02.906) 0:05:38.531 ******* 2026-02-16 05:20:27.926000 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:20:27.926007 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:20:27.926060 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:20:27.926068 | orchestrator | 2026-02-16 05:20:27.926074 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-16 05:20:27.926081 | orchestrator | Monday 16 February 2026 05:20:17 +0000 (0:00:02.213) 0:05:40.745 ******* 2026-02-16 05:20:27.926087 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:20:27.926093 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:20:27.926099 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:20:27.926105 | orchestrator | 2026-02-16 05:20:27.926111 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-16 05:20:27.926117 | orchestrator | Monday 16 February 2026 05:20:20 +0000 (0:00:02.997) 0:05:43.743 ******* 2026-02-16 05:20:27.926124 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:20:27.926130 | orchestrator | 2026-02-16 05:20:27.926136 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-16 05:20:27.926142 | orchestrator | Monday 16 February 2026 05:20:23 +0000 (0:00:02.425) 0:05:46.168 ******* 2026-02-16 05:20:27.926149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-16 05:20:27.926180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-16 05:20:44.952705 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-16 05:20:44.952859 | orchestrator | 2026-02-16 05:20:44.952890 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-16 05:20:44.952911 | orchestrator | Monday 16 February 2026 05:20:27 +0000 (0:00:04.561) 0:05:50.729 ******* 2026-02-16 05:20:44.952932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-16 05:20:44.952952 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:20:44.952974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-16 05:20:44.953027 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:20:44.953078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-16 05:20:44.953100 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:20:44.953119 | orchestrator | 2026-02-16 05:20:44.953138 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-16 05:20:44.953157 | orchestrator | Monday 16 February 2026 05:20:29 +0000 (0:00:01.627) 0:05:52.357 ******* 2026-02-16 05:20:44.953178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-16 05:20:44.953211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-16 05:20:44.953226 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:20:44.953237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-16 05:20:44.953249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-16 05:20:44.953260 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:20:44.953270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-16 05:20:44.953282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-16 05:20:44.953303 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:20:44.953323 | orchestrator | 2026-02-16 05:20:44.953340 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-16 05:20:44.953358 | orchestrator | Monday 16 February 2026 05:20:31 +0000 (0:00:01.898) 0:05:54.256 ******* 2026-02-16 05:20:44.953377 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:20:44.953397 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:20:44.953417 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:20:44.953437 | orchestrator | 2026-02-16 05:20:44.953455 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-16 05:20:44.953468 | orchestrator | Monday 16 February 2026 05:20:33 +0000 (0:00:02.265) 0:05:56.521 ******* 2026-02-16 05:20:44.953479 | orchestrator | ok: [testbed-node-0] 2026-02-16 
05:20:44.953490 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:20:44.953500 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:20:44.953511 | orchestrator | 2026-02-16 05:20:44.953521 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-16 05:20:44.953532 | orchestrator | Monday 16 February 2026 05:20:36 +0000 (0:00:03.012) 0:05:59.534 ******* 2026-02-16 05:20:44.953543 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:20:44.953553 | orchestrator | 2026-02-16 05:20:44.953564 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-16 05:20:44.953603 | orchestrator | Monday 16 February 2026 05:20:39 +0000 (0:00:02.361) 0:06:01.896 ******* 2026-02-16 05:20:44.953629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:20:46.073060 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:20:46.073167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:20:46.073205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:20:46.073239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:20:46.073259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 05:20:46.073273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-16 05:20:46.073292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 05:20:46.073304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:20:46.073316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-16 05:20:46.073336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 05:20:46.825737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-16 05:20:46.825893 | orchestrator | 2026-02-16 05:20:46.825910 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-16 05:20:46.825922 | orchestrator | Monday 16 February 2026 05:20:46 +0000 (0:00:06.980) 0:06:08.877 ******* 2026-02-16 05:20:46.825937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:20:46.825950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:20:46.825962 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 05:20:46.825991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-16 05:20:46.826002 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:20:46.826078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:20:46.826102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:20:46.826113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 05:20:46.826124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-16 05:20:46.826135 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:20:46.826161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:21:06.055096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:21:06.055241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-16 05:21:06.055260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 
'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-16 05:21:06.055273 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:21:06.055286 | orchestrator | 2026-02-16 05:21:06.055298 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-16 05:21:06.055310 | orchestrator | Monday 16 February 2026 05:20:47 +0000 (0:00:01.894) 0:06:10.771 ******* 2026-02-16 05:21:06.055321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:21:06.055334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:21:06.055346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:21:06.055358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:21:06.055402 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:21:06.055423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:21:06.055465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:21:06.055476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:21:06.055486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:21:06.055496 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:21:06.055506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:21:06.055516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}})  2026-02-16 05:21:06.055526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:21:06.055536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:21:06.055545 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:21:06.055555 | orchestrator | 2026-02-16 05:21:06.055565 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-16 05:21:06.055652 | orchestrator | Monday 16 February 2026 05:20:50 +0000 (0:00:02.539) 0:06:13.310 ******* 2026-02-16 05:21:06.055666 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:21:06.055679 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:21:06.055691 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:21:06.055702 | orchestrator | 2026-02-16 05:21:06.055713 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-16 05:21:06.055725 | orchestrator | Monday 16 February 2026 05:20:52 +0000 (0:00:02.295) 0:06:15.606 ******* 2026-02-16 05:21:06.055736 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:21:06.055747 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:21:06.055758 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:21:06.055769 | orchestrator | 2026-02-16 05:21:06.055780 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-16 05:21:06.055793 | orchestrator | Monday 16 February 2026 05:20:55 +0000 (0:00:03.157) 0:06:18.764 ******* 2026-02-16 05:21:06.055804 | orchestrator 
| included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:21:06.055815 | orchestrator | 2026-02-16 05:21:06.055827 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-16 05:21:06.055847 | orchestrator | Monday 16 February 2026 05:20:58 +0000 (0:00:02.749) 0:06:21.513 ******* 2026-02-16 05:21:06.055860 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-16 05:21:06.055872 | orchestrator | 2026-02-16 05:21:06.055884 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-16 05:21:06.055895 | orchestrator | Monday 16 February 2026 05:21:00 +0000 (0:00:01.726) 0:06:23.240 ******* 2026-02-16 05:21:06.055909 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-16 05:21:06.055927 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-16 05:21:06.055950 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-16 05:21:25.907890 | orchestrator | 2026-02-16 05:21:25.907986 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-16 05:21:25.907999 | orchestrator | Monday 16 February 2026 05:21:06 +0000 (0:00:05.616) 0:06:28.857 ******* 2026-02-16 05:21:25.908012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-16 05:21:25.908023 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:21:25.908033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-16 05:21:25.908042 | orchestrator | skipping: [testbed-node-1] 
2026-02-16 05:21:25.908050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-16 05:21:25.908079 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:21:25.908086 | orchestrator | 2026-02-16 05:21:25.908093 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-16 05:21:25.908101 | orchestrator | Monday 16 February 2026 05:21:08 +0000 (0:00:02.504) 0:06:31.362 ******* 2026-02-16 05:21:25.908109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-16 05:21:25.908120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-16 05:21:25.908129 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:21:25.908137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-16 05:21:25.908145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-16 05:21:25.908153 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:21:25.908161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-16 05:21:25.908182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-16 05:21:25.908190 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:21:25.908198 | orchestrator | 2026-02-16 05:21:25.908206 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-16 05:21:25.908215 | orchestrator | Monday 16 February 2026 05:21:11 +0000 (0:00:02.642) 0:06:34.004 ******* 2026-02-16 05:21:25.908223 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:21:25.908232 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:21:25.908240 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:21:25.908248 | orchestrator | 2026-02-16 05:21:25.908256 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-16 05:21:25.908264 | orchestrator | Monday 16 February 2026 05:21:14 +0000 (0:00:03.807) 0:06:37.811 ******* 2026-02-16 05:21:25.908272 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:21:25.908280 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:21:25.908302 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:21:25.908310 | orchestrator | 2026-02-16 05:21:25.908318 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-16 05:21:25.908326 | orchestrator | Monday 
16 February 2026 05:21:19 +0000 (0:00:04.060) 0:06:41.872 ******* 2026-02-16 05:21:25.908335 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-16 05:21:25.908344 | orchestrator | 2026-02-16 05:21:25.908352 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-16 05:21:25.908360 | orchestrator | Monday 16 February 2026 05:21:20 +0000 (0:00:01.710) 0:06:43.583 ******* 2026-02-16 05:21:25.908369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-16 05:21:25.908385 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:21:25.908394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-16 05:21:25.908402 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:21:25.908410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 
'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-16 05:21:25.908418 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:21:25.908426 | orchestrator | 2026-02-16 05:21:25.908434 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-16 05:21:25.908442 | orchestrator | Monday 16 February 2026 05:21:23 +0000 (0:00:02.563) 0:06:46.146 ******* 2026-02-16 05:21:25.908450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-16 05:21:25.908459 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:21:25.908470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-16 
05:21:25.908479 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:21:25.908492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-16 05:21:58.700866 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:21:58.701029 | orchestrator | 2026-02-16 05:21:58.701049 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-16 05:21:58.701062 | orchestrator | Monday 16 February 2026 05:21:25 +0000 (0:00:02.555) 0:06:48.701 ******* 2026-02-16 05:21:58.701075 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:21:58.701108 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:21:58.701119 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:21:58.701130 | orchestrator | 2026-02-16 05:21:58.701141 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-16 05:21:58.701152 | orchestrator | Monday 16 February 2026 05:21:28 +0000 (0:00:02.452) 0:06:51.154 ******* 2026-02-16 05:21:58.701163 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:21:58.701174 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:21:58.701185 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:21:58.701195 | orchestrator | 2026-02-16 05:21:58.701206 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-16 05:21:58.701217 | orchestrator | Monday 16 February 2026 05:21:32 +0000 (0:00:03.768) 0:06:54.922 ******* 2026-02-16 05:21:58.701228 | 
orchestrator | ok: [testbed-node-0] 2026-02-16 05:21:58.701238 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:21:58.701249 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:21:58.701259 | orchestrator | 2026-02-16 05:21:58.701270 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-16 05:21:58.701281 | orchestrator | Monday 16 February 2026 05:21:36 +0000 (0:00:03.990) 0:06:58.913 ******* 2026-02-16 05:21:58.701292 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-16 05:21:58.701304 | orchestrator | 2026-02-16 05:21:58.701315 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-16 05:21:58.701331 | orchestrator | Monday 16 February 2026 05:21:38 +0000 (0:00:01.975) 0:07:00.889 ******* 2026-02-16 05:21:58.701347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-16 05:21:58.701363 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:21:58.701376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-16 05:21:58.701389 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:21:58.701402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-16 05:21:58.701414 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:21:58.701430 | orchestrator | 2026-02-16 05:21:58.701449 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-16 05:21:58.701468 | orchestrator | Monday 16 February 2026 05:21:40 +0000 (0:00:02.333) 0:07:03.223 ******* 2026-02-16 05:21:58.701506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-16 05:21:58.701538 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:21:58.701608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-16 05:21:58.701629 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:21:58.701650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-16 05:21:58.701669 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:21:58.701692 | orchestrator | 2026-02-16 05:21:58.701713 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-16 05:21:58.701732 | orchestrator | Monday 16 February 2026 05:21:42 +0000 (0:00:02.263) 0:07:05.486 ******* 2026-02-16 05:21:58.701750 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:21:58.701787 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:21:58.701807 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:21:58.701824 | orchestrator | 2026-02-16 05:21:58.701841 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-16 05:21:58.701861 | orchestrator | Monday 16 February 2026 05:21:44 +0000 (0:00:02.212) 0:07:07.699 ******* 2026-02-16 05:21:58.701877 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:21:58.701894 | orchestrator | ok: 
[testbed-node-0] 2026-02-16 05:21:58.701912 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:21:58.701942 | orchestrator | 2026-02-16 05:21:58.701959 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-16 05:21:58.701977 | orchestrator | Monday 16 February 2026 05:21:48 +0000 (0:00:03.247) 0:07:10.946 ******* 2026-02-16 05:21:58.701993 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:21:58.702010 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:21:58.702116 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:21:58.702132 | orchestrator | 2026-02-16 05:21:58.702157 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-16 05:21:58.702175 | orchestrator | Monday 16 February 2026 05:21:52 +0000 (0:00:03.921) 0:07:14.867 ******* 2026-02-16 05:21:58.702192 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:21:58.702217 | orchestrator | 2026-02-16 05:21:58.702233 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-16 05:21:58.702251 | orchestrator | Monday 16 February 2026 05:21:54 +0000 (0:00:02.181) 0:07:17.049 ******* 2026-02-16 05:21:58.702270 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 05:21:58.702315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 05:21:58.702349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 05:21:59.814212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 05:21:59.814320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 05:21:59.814335 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 05:21:59.814373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 05:21:59.814398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 05:21:59.814425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 05:21:59.814435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': 
{'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 05:21:59.814445 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-16 05:21:59.814455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 05:21:59.814472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 05:21:59.814487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 05:21:59.814514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 05:21:59.814534 | orchestrator | 2026-02-16 05:21:59.814551 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-16 05:22:00.602630 | orchestrator | Monday 16 February 2026 05:21:59 +0000 (0:00:05.574) 0:07:22.623 ******* 2026-02-16 05:22:00.602758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-16 05:22:00.602788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 05:22:00.602809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 05:22:00.602856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 05:22:00.602876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 05:22:00.602894 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:22:00.602990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-16 05:22:00.603015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 05:22:00.603033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 05:22:00.603064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 05:22:00.603083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 05:22:00.603101 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:22:00.603127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-16 05:22:00.603159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-16 05:22:18.672936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-16 05:22:18.673048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-16 05:22:18.673083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-16 05:22:18.673090 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:22:18.673098 | orchestrator | 2026-02-16 05:22:18.673106 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-16 05:22:18.673115 | orchestrator | Monday 16 February 2026 05:22:01 +0000 (0:00:01.911) 0:07:24.535 ******* 2026-02-16 05:22:18.673123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-16 05:22:18.673132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-16 05:22:18.673140 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:22:18.673147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-16 05:22:18.673165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-16 05:22:18.673172 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:22:18.673178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-16 05:22:18.673184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-16 05:22:18.673191 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:22:18.673197 | orchestrator | 2026-02-16 05:22:18.673204 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-16 05:22:18.673210 | orchestrator | Monday 16 February 2026 05:22:03 +0000 (0:00:02.059) 0:07:26.595 ******* 2026-02-16 05:22:18.673217 | orchestrator | ok: [testbed-node-0] 2026-02-16 
05:22:18.673224 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:22:18.673230 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:22:18.673236 | orchestrator | 2026-02-16 05:22:18.673243 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-16 05:22:18.673249 | orchestrator | Monday 16 February 2026 05:22:06 +0000 (0:00:02.306) 0:07:28.902 ******* 2026-02-16 05:22:18.673255 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:22:18.673261 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:22:18.673279 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:22:18.673287 | orchestrator | 2026-02-16 05:22:18.673293 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-16 05:22:18.673300 | orchestrator | Monday 16 February 2026 05:22:09 +0000 (0:00:03.017) 0:07:31.920 ******* 2026-02-16 05:22:18.673312 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:22:18.673319 | orchestrator | 2026-02-16 05:22:18.673325 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-16 05:22:18.673332 | orchestrator | Monday 16 February 2026 05:22:11 +0000 (0:00:02.663) 0:07:34.584 ******* 2026-02-16 05:22:18.673339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:22:18.673349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:22:18.673359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:22:18.673371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-16 05:22:20.905837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-16 05:22:20.905946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-16 05:22:20.905965 | orchestrator | 2026-02-16 05:22:20.905980 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-16 
05:22:20.905993 | orchestrator | Monday 16 February 2026 05:22:18 +0000 (0:00:06.891) 0:07:41.476 ******* 2026-02-16 05:22:20.906077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:22:20.906114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-16 05:22:20.906155 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:22:20.906169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:22:20.906187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-16 05:22:20.906199 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:22:20.906211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:22:20.906241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-16 05:22:31.778278 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:22:31.778422 | orchestrator | 2026-02-16 05:22:31.778453 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-16 05:22:31.778477 | orchestrator | Monday 16 February 2026 05:22:20 +0000 (0:00:02.230) 0:07:43.706 ******* 2026-02-16 05:22:31.778500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:22:31.778526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-16 05:22:31.778663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-16 05:22:31.778694 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:22:31.778717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:22:31.778738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-16 05:22:31.778780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-16 05:22:31.778803 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:22:31.778824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:22:31.778843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-16 05:22:31.778883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-16 05:22:31.778897 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:22:31.778910 | orchestrator | 2026-02-16 05:22:31.778922 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-16 05:22:31.778934 | orchestrator | Monday 16 February 2026 05:22:22 +0000 (0:00:01.821) 0:07:45.528 ******* 2026-02-16 05:22:31.778944 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:22:31.778955 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:22:31.778965 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:22:31.778976 | orchestrator | 2026-02-16 05:22:31.778987 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-16 05:22:31.778997 | orchestrator | Monday 16 February 2026 05:22:24 +0000 (0:00:01.490) 0:07:47.019 ******* 2026-02-16 05:22:31.779008 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:22:31.779018 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:22:31.779029 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:22:31.779040 | orchestrator | 2026-02-16 05:22:31.779050 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-16 05:22:31.779061 | orchestrator | Monday 16 February 2026 05:22:26 +0000 (0:00:02.478) 0:07:49.497 ******* 2026-02-16 05:22:31.779071 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:22:31.779083 | orchestrator | 2026-02-16 05:22:31.779093 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-16 05:22:31.779104 | orchestrator | Monday 16 February 2026 05:22:29 +0000 (0:00:02.529) 0:07:52.027 ******* 2026-02-16 05:22:31.779142 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-16 05:22:31.779159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-16 05:22:31.779172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:31.779199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:31.779212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-16 05:22:31.779232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-16 05:22:33.797124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-16 05:22:33.797246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-16 05:22:33.797323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:33.797357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:33.797379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-16 05:22:33.797398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-16 05:22:33.797440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:33.797459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:33.797477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-16 05:22:33.797517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:22:33.797538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-16 05:22:33.797557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:33.797622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:35.950743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:22:35.950905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-16 05:22:35.950923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:22:35.950935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-16 05:22:35.950964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-16 05:22:35.950976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-16 05:22:35.950994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:35.951009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:35.951019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-16 05:22:35.951030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:35.951040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-16 05:22:35.951051 | orchestrator | 2026-02-16 05:22:35.951063 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-16 05:22:35.951074 | orchestrator | Monday 16 February 2026 05:22:34 +0000 (0:00:05.735) 
0:07:57.763 ******* 2026-02-16 05:22:35.951093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-16 05:22:36.157409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-16 05:22:36.157528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:36.157545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:36.157558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-16 05:22:36.157618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:22:36.157651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-16 05:22:36.157686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:36.157704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:36.157717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-16 05:22:36.157729 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:22:36.157743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-16 05:22:36.157755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-16 05:22:36.157773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:36.157841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 
'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:37.369849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-16 05:22:37.369973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:22:37.370000 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-16 05:22:37.370079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:37.370126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:37.370227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-16 05:22:37.370248 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:22:37.370354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-16 05:22:37.370369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-16 05:22:37.370378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:37.370387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:37.370409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-16 05:22:37.370452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:22:49.786127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-16 05:22:49.786212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:49.786222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:22:49.786228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-16 05:22:49.786250 | orchestrator | skipping: [testbed-node-2] 2026-02-16 
05:22:49.786258 | orchestrator | 2026-02-16 05:22:49.786264 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-16 05:22:49.786271 | orchestrator | Monday 16 February 2026 05:22:37 +0000 (0:00:02.416) 0:08:00.180 ******* 2026-02-16 05:22:49.786277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-16 05:22:49.786285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-16 05:22:49.786293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:22:49.786310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:22:49.786317 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:22:49.786326 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-16 05:22:49.786332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-16 05:22:49.786337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:22:49.786342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:22:49.786347 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:22:49.786353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-16 
05:22:49.786362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-16 05:22:49.786368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:22:49.786373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-16 05:22:49.786378 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:22:49.786383 | orchestrator | 2026-02-16 05:22:49.786388 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-16 05:22:49.786394 | orchestrator | Monday 16 February 2026 05:22:39 +0000 (0:00:01.853) 0:08:02.033 ******* 2026-02-16 05:22:49.786399 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:22:49.786404 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:22:49.786409 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:22:49.786413 | orchestrator | 2026-02-16 05:22:49.786418 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-16 05:22:49.786423 | orchestrator | Monday 16 February 
2026 05:22:41 +0000 (0:00:02.174) 0:08:04.208 ******* 2026-02-16 05:22:49.786429 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:22:49.786434 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:22:49.786439 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:22:49.786444 | orchestrator | 2026-02-16 05:22:49.786449 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-16 05:22:49.786454 | orchestrator | Monday 16 February 2026 05:22:43 +0000 (0:00:02.292) 0:08:06.500 ******* 2026-02-16 05:22:49.786459 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:22:49.786464 | orchestrator | 2026-02-16 05:22:49.786469 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-16 05:22:49.786474 | orchestrator | Monday 16 February 2026 05:22:46 +0000 (0:00:02.334) 0:08:08.834 ******* 2026-02-16 05:22:49.786486 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-16 
05:23:06.826894 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-16 05:23:06.826993 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}}) 2026-02-16 05:23:06.827006 | orchestrator | 2026-02-16 05:23:06.827016 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-16 05:23:06.827025 | orchestrator | Monday 16 February 2026 05:22:49 +0000 (0:00:03.753) 0:08:12.588 ******* 2026-02-16 05:23:06.827034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-16 05:23:06.827042 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:23:06.827081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-16 05:23:06.827110 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:23:06.827118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-16 05:23:06.827125 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:23:06.827133 | orchestrator | 2026-02-16 05:23:06.827140 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-16 05:23:06.827148 | orchestrator | Monday 16 February 2026 05:22:51 +0000 (0:00:01.535) 0:08:14.123 ******* 2026-02-16 05:23:06.827156 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-16 05:23:06.827164 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:23:06.827171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-16 05:23:06.827178 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:23:06.827186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-16 05:23:06.827193 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:23:06.827200 | orchestrator | 2026-02-16 05:23:06.827208 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-16 05:23:06.827215 | orchestrator | Monday 16 February 2026 05:22:52 +0000 (0:00:01.463) 0:08:15.586 ******* 2026-02-16 05:23:06.827222 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:23:06.827229 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:23:06.827236 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:23:06.827243 | orchestrator | 2026-02-16 05:23:06.827250 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-16 05:23:06.827257 | orchestrator | Monday 16 February 2026 05:22:54 +0000 (0:00:01.951) 0:08:17.538 ******* 2026-02-16 05:23:06.827264 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:23:06.827271 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:23:06.827279 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:23:06.827286 | orchestrator | 2026-02-16 05:23:06.827293 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-16 05:23:06.827300 | orchestrator | Monday 16 February 2026 
05:22:57 +0000 (0:00:02.334) 0:08:19.872 ******* 2026-02-16 05:23:06.827307 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:23:06.827314 | orchestrator | 2026-02-16 05:23:06.827321 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-16 05:23:06.827328 | orchestrator | Monday 16 February 2026 05:22:59 +0000 (0:00:02.429) 0:08:22.302 ******* 2026-02-16 05:23:06.827340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-16 05:23:06.827362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-16 05:23:08.479939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-16 05:23:08.480046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-16 05:23:08.480129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-16 05:23:08.480245 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-16 05:23:08.480263 | orchestrator | 2026-02-16 05:23:08.480277 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-16 05:23:08.480289 | orchestrator | Monday 16 February 2026 05:23:06 +0000 (0:00:07.331) 0:08:29.633 ******* 2026-02-16 05:23:08.480302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-16 05:23:08.480314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-16 05:23:08.480337 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:23:08.480349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-16 05:23:08.480371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-16 05:23:29.955822 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:23:29.956017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-16 05:23:29.956105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-16 05:23:29.956168 | orchestrator | 
skipping: [testbed-node-2] 2026-02-16 05:23:29.956189 | orchestrator | 2026-02-16 05:23:29.956209 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-16 05:23:29.956228 | orchestrator | Monday 16 February 2026 05:23:08 +0000 (0:00:01.655) 0:08:31.288 ******* 2026-02-16 05:23:29.956248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-16 05:23:29.956270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-16 05:23:29.956290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-16 05:23:29.956309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-16 05:23:29.956326 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:23:29.956344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-16 05:23:29.956362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-16 05:23:29.956404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-16 05:23:29.956423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-16 05:23:29.956441 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:23:29.956459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-16 05:23:29.956489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-16 05:23:29.956506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-16 05:23:29.956522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-16 05:23:29.956538 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:23:29.956554 | orchestrator | 2026-02-16 05:23:29.956595 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-16 05:23:29.956614 | orchestrator | Monday 16 February 2026 05:23:10 +0000 (0:00:02.235) 0:08:33.524 ******* 2026-02-16 05:23:29.956631 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:23:29.956647 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:23:29.956663 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:23:29.956680 | orchestrator | 2026-02-16 05:23:29.956703 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-16 05:23:29.956720 | orchestrator | Monday 16 February 2026 05:23:13 +0000 (0:00:02.343) 0:08:35.868 ******* 2026-02-16 05:23:29.956737 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:23:29.956753 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:23:29.956768 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:23:29.956784 | orchestrator | 2026-02-16 05:23:29.956800 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-16 05:23:29.956816 | orchestrator | Monday 16 February 2026 05:23:16 +0000 (0:00:03.088) 0:08:38.957 ******* 2026-02-16 05:23:29.956832 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:23:29.956849 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:23:29.956865 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:23:29.956882 | orchestrator | 2026-02-16 05:23:29.956898 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-16 05:23:29.956915 | orchestrator | Monday 16 February 2026 05:23:17 +0000 (0:00:01.351) 0:08:40.308 ******* 2026-02-16 
05:23:29.956931 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:23:29.956948 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:23:29.956964 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:23:29.956981 | orchestrator | 2026-02-16 05:23:29.956998 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-16 05:23:29.957015 | orchestrator | Monday 16 February 2026 05:23:18 +0000 (0:00:01.362) 0:08:41.671 ******* 2026-02-16 05:23:29.957032 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:23:29.957049 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:23:29.957065 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:23:29.957083 | orchestrator | 2026-02-16 05:23:29.957100 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-16 05:23:29.957116 | orchestrator | Monday 16 February 2026 05:23:20 +0000 (0:00:01.663) 0:08:43.335 ******* 2026-02-16 05:23:29.957133 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:23:29.957149 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:23:29.957166 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:23:29.957182 | orchestrator | 2026-02-16 05:23:29.957198 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-16 05:23:29.957215 | orchestrator | Monday 16 February 2026 05:23:21 +0000 (0:00:01.413) 0:08:44.749 ******* 2026-02-16 05:23:29.957232 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:23:29.957249 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:23:29.957282 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:23:29.957298 | orchestrator | 2026-02-16 05:23:29.957314 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-02-16 05:23:29.957331 | orchestrator | Monday 16 February 2026 05:23:23 +0000 (0:00:01.335) 0:08:46.084 ******* 2026-02-16 
05:23:29.957348 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:23:29.957365 | orchestrator | 2026-02-16 05:23:29.957383 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-16 05:23:29.957399 | orchestrator | Monday 16 February 2026 05:23:25 +0000 (0:00:02.676) 0:08:48.761 ******* 2026-02-16 05:23:29.957433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-16 05:23:34.174839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-16 05:23:34.174916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-16 05:23:34.174933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 05:23:34.174938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 05:23:34.174957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-16 05:23:34.174963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 05:23:34.174978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 05:23:34.174983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-16 05:23:34.174987 | orchestrator | 2026-02-16 05:23:34.174993 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-16 05:23:34.174997 | orchestrator | Monday 16 February 2026 05:23:29 +0000 (0:00:04.000) 0:08:52.761 ******* 2026-02-16 05:23:34.175003 | orchestrator | changed: [testbed-node-0] => { 2026-02-16 05:23:34.175008 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:23:34.175012 | orchestrator | } 2026-02-16 05:23:34.175016 | orchestrator | changed: [testbed-node-1] => { 2026-02-16 05:23:34.175019 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:23:34.175023 | orchestrator | } 2026-02-16 05:23:34.175027 | orchestrator | changed: [testbed-node-2] => { 2026-02-16 05:23:34.175031 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:23:34.175034 | orchestrator | } 2026-02-16 05:23:34.175038 | orchestrator | 2026-02-16 05:23:34.175042 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-16 05:23:34.175046 | orchestrator | Monday 16 February 2026 05:23:31 +0000 (0:00:01.444) 0:08:54.206 ******* 2026-02-16 05:23:34.175053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-16 05:23:34.175060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 05:23:34.175064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:23:34.175068 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:23:34.175072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-16 05:23:34.175081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 05:25:34.593392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:25:34.595158 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:25:34.595211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-16 05:25:34.595244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-16 05:25:34.595252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-16 05:25:34.595259 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:25:34.595266 | orchestrator | 2026-02-16 05:25:34.595275 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-16 05:25:34.595283 | orchestrator | Monday 16 February 2026 05:23:34 +0000 (0:00:02.773) 0:08:56.980 ******* 2026-02-16 05:25:34.595289 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:25:34.595296 | orchestrator | ok: [testbed-node-1] 2026-02-16 
05:25:34.595303 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:25:34.595309 | orchestrator | 2026-02-16 05:25:34.595318 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-16 05:25:34.595329 | orchestrator | Monday 16 February 2026 05:23:35 +0000 (0:00:01.746) 0:08:58.727 ******* 2026-02-16 05:25:34.595340 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:25:34.595350 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:25:34.595361 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:25:34.595372 | orchestrator | 2026-02-16 05:25:34.595383 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-16 05:25:34.595395 | orchestrator | Monday 16 February 2026 05:23:37 +0000 (0:00:01.399) 0:09:00.126 ******* 2026-02-16 05:25:34.595406 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:25:34.595418 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:25:34.595425 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:25:34.595431 | orchestrator | 2026-02-16 05:25:34.595437 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-16 05:25:34.595443 | orchestrator | Monday 16 February 2026 05:23:44 +0000 (0:00:07.057) 0:09:07.184 ******* 2026-02-16 05:25:34.595449 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:25:34.595455 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:25:34.595461 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:25:34.595468 | orchestrator | 2026-02-16 05:25:34.595474 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-16 05:25:34.595480 | orchestrator | Monday 16 February 2026 05:23:51 +0000 (0:00:07.471) 0:09:14.656 ******* 2026-02-16 05:25:34.595486 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:25:34.595492 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:25:34.595498 | orchestrator | 
changed: [testbed-node-2] 2026-02-16 05:25:34.595504 | orchestrator | 2026-02-16 05:25:34.595511 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-16 05:25:34.595517 | orchestrator | Monday 16 February 2026 05:23:59 +0000 (0:00:07.174) 0:09:21.830 ******* 2026-02-16 05:25:34.595523 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:25:34.595529 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:25:34.595535 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:25:34.595542 | orchestrator | 2026-02-16 05:25:34.595623 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-16 05:25:34.595646 | orchestrator | Monday 16 February 2026 05:24:06 +0000 (0:00:07.654) 0:09:29.485 ******* 2026-02-16 05:25:34.595657 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:25:34.595667 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:25:34.595677 | orchestrator | 2026-02-16 05:25:34.595689 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-16 05:25:34.595700 | orchestrator | Monday 16 February 2026 05:24:10 +0000 (0:00:03.734) 0:09:33.220 ******* 2026-02-16 05:25:34.595712 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:25:34.595723 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:25:34.595736 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:25:34.595747 | orchestrator | 2026-02-16 05:25:34.595759 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-16 05:25:34.595771 | orchestrator | Monday 16 February 2026 05:24:24 +0000 (0:00:13.773) 0:09:46.994 ******* 2026-02-16 05:25:34.595783 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:25:34.595794 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:25:34.595800 | orchestrator | 2026-02-16 05:25:34.595807 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived 
container] ************* 2026-02-16 05:25:34.595813 | orchestrator | Monday 16 February 2026 05:24:27 +0000 (0:00:03.712) 0:09:50.706 ******* 2026-02-16 05:25:34.595819 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:25:34.595831 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:25:34.595838 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:25:34.595844 | orchestrator | 2026-02-16 05:25:34.595850 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-16 05:25:34.595856 | orchestrator | Monday 16 February 2026 05:24:34 +0000 (0:00:07.070) 0:09:57.777 ******* 2026-02-16 05:25:34.595862 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:25:34.595868 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:25:34.595874 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:25:34.595881 | orchestrator | 2026-02-16 05:25:34.595887 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-16 05:25:34.595893 | orchestrator | Monday 16 February 2026 05:24:41 +0000 (0:00:06.906) 0:10:04.684 ******* 2026-02-16 05:25:34.595899 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:25:34.595905 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:25:34.595911 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:25:34.595917 | orchestrator | 2026-02-16 05:25:34.595923 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-16 05:25:34.595930 | orchestrator | Monday 16 February 2026 05:24:48 +0000 (0:00:06.865) 0:10:11.549 ******* 2026-02-16 05:25:34.595936 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:25:34.595942 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:25:34.595948 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:25:34.595954 | orchestrator | 2026-02-16 05:25:34.595960 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] 
**************** 2026-02-16 05:25:34.595966 | orchestrator | Monday 16 February 2026 05:24:55 +0000 (0:00:06.904) 0:10:18.454 ******* 2026-02-16 05:25:34.595972 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:25:34.595979 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:25:34.595985 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:25:34.595991 | orchestrator | 2026-02-16 05:25:34.595997 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-02-16 05:25:34.596003 | orchestrator | Monday 16 February 2026 05:25:02 +0000 (0:00:07.333) 0:10:25.788 ******* 2026-02-16 05:25:34.596010 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:25:34.596016 | orchestrator | 2026-02-16 05:25:34.596022 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-16 05:25:34.596028 | orchestrator | Monday 16 February 2026 05:25:06 +0000 (0:00:03.602) 0:10:29.390 ******* 2026-02-16 05:25:34.596034 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:25:34.596040 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:25:34.596047 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:25:34.596053 | orchestrator | 2026-02-16 05:25:34.596065 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-02-16 05:25:34.596071 | orchestrator | Monday 16 February 2026 05:25:19 +0000 (0:00:12.625) 0:10:42.016 ******* 2026-02-16 05:25:34.596077 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:25:34.596084 | orchestrator | 2026-02-16 05:25:34.596090 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-16 05:25:34.596096 | orchestrator | Monday 16 February 2026 05:25:22 +0000 (0:00:03.624) 0:10:45.641 ******* 2026-02-16 05:25:34.596102 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:25:34.596108 | orchestrator | skipping: [testbed-node-2] 2026-02-16 
05:25:34.596114 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:25:34.596120 | orchestrator | 2026-02-16 05:25:34.596127 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-16 05:25:34.596133 | orchestrator | Monday 16 February 2026 05:25:29 +0000 (0:00:06.958) 0:10:52.600 ******* 2026-02-16 05:25:34.596139 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:25:34.596145 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:25:34.596151 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:25:34.596157 | orchestrator | 2026-02-16 05:25:34.596163 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-16 05:25:34.596170 | orchestrator | Monday 16 February 2026 05:25:31 +0000 (0:00:02.064) 0:10:54.664 ******* 2026-02-16 05:25:34.596176 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:25:34.596182 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:25:34.596188 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:25:34.596194 | orchestrator | 2026-02-16 05:25:34.596200 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 05:25:34.596208 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-16 05:25:34.596216 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-16 05:25:34.596229 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-16 05:25:35.470434 | orchestrator | 2026-02-16 05:25:35.470563 | orchestrator | 2026-02-16 05:25:35.470635 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 05:25:35.470650 | orchestrator | Monday 16 February 2026 05:25:34 +0000 (0:00:02.727) 0:10:57.392 ******* 2026-02-16 05:25:35.470662 | orchestrator | 
=============================================================================== 2026-02-16 05:25:35.470673 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.77s 2026-02-16 05:25:35.470685 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 12.63s 2026-02-16 05:25:35.470696 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.65s 2026-02-16 05:25:35.470706 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.47s 2026-02-16 05:25:35.470717 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.33s 2026-02-16 05:25:35.470728 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.33s 2026-02-16 05:25:35.470739 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.17s 2026-02-16 05:25:35.470749 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.07s 2026-02-16 05:25:35.470781 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.06s 2026-02-16 05:25:35.470792 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.98s 2026-02-16 05:25:35.470803 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 6.96s 2026-02-16 05:25:35.470814 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.91s 2026-02-16 05:25:35.470825 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.90s 2026-02-16 05:25:35.470860 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.89s 2026-02-16 05:25:35.470872 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.87s 2026-02-16 05:25:35.470882 | orchestrator | haproxy-config 
: Copying over neutron haproxy config -------------------- 6.05s 2026-02-16 05:25:35.470893 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.74s 2026-02-16 05:25:35.470904 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.64s 2026-02-16 05:25:35.470914 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.62s 2026-02-16 05:25:35.470925 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 5.57s 2026-02-16 05:25:35.766915 | orchestrator | + osism apply -a upgrade opensearch 2026-02-16 05:25:37.806394 | orchestrator | 2026-02-16 05:25:37 | INFO  | Task cff6f2e6-6cae-48c1-9732-0b7dd329702d (opensearch) was prepared for execution. 2026-02-16 05:25:37.806511 | orchestrator | 2026-02-16 05:25:37 | INFO  | It takes a moment until task cff6f2e6-6cae-48c1-9732-0b7dd329702d (opensearch) has been started and output is visible here. 2026-02-16 05:25:54.697673 | orchestrator | 2026-02-16 05:25:54.697785 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 05:25:54.697805 | orchestrator | 2026-02-16 05:25:54.697818 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 05:25:54.697831 | orchestrator | Monday 16 February 2026 05:25:43 +0000 (0:00:01.395) 0:00:01.395 ******* 2026-02-16 05:25:54.697856 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:25:54.697869 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:25:54.697881 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:25:54.697894 | orchestrator | 2026-02-16 05:25:54.697907 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 05:25:54.697920 | orchestrator | Monday 16 February 2026 05:25:44 +0000 (0:00:01.648) 0:00:03.044 ******* 2026-02-16 05:25:54.697931 | orchestrator | ok: [testbed-node-0] 
=> (item=enable_opensearch_True) 2026-02-16 05:25:54.697943 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-16 05:25:54.697956 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-16 05:25:54.697968 | orchestrator | 2026-02-16 05:25:54.697981 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-16 05:25:54.697994 | orchestrator | 2026-02-16 05:25:54.698007 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-16 05:25:54.698072 | orchestrator | Monday 16 February 2026 05:25:46 +0000 (0:00:01.970) 0:00:05.014 ******* 2026-02-16 05:25:54.698082 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:25:54.698090 | orchestrator | 2026-02-16 05:25:54.698097 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-16 05:25:54.698104 | orchestrator | Monday 16 February 2026 05:25:48 +0000 (0:00:02.070) 0:00:07.085 ******* 2026-02-16 05:25:54.698112 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-16 05:25:54.698120 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-16 05:25:54.698127 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-16 05:25:54.698134 | orchestrator | 2026-02-16 05:25:54.698142 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-16 05:25:54.698149 | orchestrator | Monday 16 February 2026 05:25:50 +0000 (0:00:02.021) 0:00:09.106 ******* 2026-02-16 05:25:54.698159 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:25:54.698207 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:25:54.698234 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:25:54.698245 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-16 05:25:54.698256 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-16 05:25:54.698275 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET 
/api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-16 05:25:54.698284 | orchestrator | 2026-02-16 05:25:54.698293 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-16 05:25:54.698302 | orchestrator | Monday 16 February 2026 05:25:53 +0000 (0:00:02.325) 0:00:11.432 ******* 2026-02-16 05:25:54.698310 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:25:54.698319 | orchestrator | 2026-02-16 05:25:54.698332 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-16 05:26:00.191742 | orchestrator | Monday 16 February 2026 05:25:54 +0000 (0:00:01.602) 0:00:13.034 ******* 2026-02-16 05:26:00.191829 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:26:00.191842 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:26:00.191878 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-16 05:26:00.191887 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-16 05:26:00.191907 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-16 05:26:00.191915 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-16 05:26:00.191937 | orchestrator | 2026-02-16 05:26:00.191945 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-16 05:26:00.192014 | orchestrator | Monday 16 February 2026 05:25:58 +0000 (0:00:03.568) 0:00:16.603 ******* 2026-02-16 05:26:00.192027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:26:00.192042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-16 05:26:02.094529 | 
orchestrator | skipping: [testbed-node-0] 2026-02-16 05:26:02.094651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-16 05:26:02.094681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-16 05:26:02.094687 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:26:02.094701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:26:02.094716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled':
True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-16 05:26:02.094721 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:26:02.094725 | orchestrator |
2026-02-16 05:26:02.094730 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-02-16 05:26:02.094735 | orchestrator | Monday 16 February 2026 05:26:00 +0000 (0:00:01.930) 0:00:18.533 *******
2026-02-16 05:26:02.094739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:26:02.094754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment':
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-16 05:26:02.094759 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:26:02.094763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:26:02.094771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:26:05.856637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-16 05:26:05.856753 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:26:05.856778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-16 05:26:05.856786 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:26:05.856792 | orchestrator |
2026-02-16 05:26:05.856799 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-02-16 05:26:05.856806 | orchestrator | Monday 16 February 2026 05:26:02 +0000 (0:00:01.897) 0:00:20.430 *******
2026-02-16 05:26:05.856812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group':
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:26:05.856834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:26:05.856846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment':
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:26:05.856856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-16 05:26:05.856863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards',
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-16 05:26:05.856876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass':
'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-16 05:26:19.788099 | orchestrator |
2026-02-16 05:26:19.788206 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-02-16 05:26:19.788221 | orchestrator | Monday 16 February 2026 05:26:05 +0000 (0:00:03.763) 0:00:24.194 *******
2026-02-16 05:26:19.788231 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:26:19.788242 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:26:19.788251 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:26:19.788260 | orchestrator |
2026-02-16 05:26:19.788269 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-02-16 05:26:19.788278 | orchestrator | Monday 16 February 2026 05:26:09 +0000 (0:00:03.447) 0:00:27.642 *******
2026-02-16 05:26:19.788287 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:26:19.788296 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:26:19.788304 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:26:19.788313 | orchestrator |
2026-02-16 05:26:19.788322 | orchestrator | TASK [service-check-containers : opensearch | Check containers] ****************
2026-02-16 05:26:19.788330 | orchestrator | Monday 16 February 2026 05:26:12 +0000 (0:00:03.358) 0:00:31.000 *******
2026-02-16 05:26:19.788357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes':
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:26:19.788370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:26:19.788380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:26:19.788428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-16 05:26:19.788453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image':
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-16 05:26:19.788464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-16 05:26:19.788481 | orchestrator |
2026-02-16 05:26:19.788491 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] ***
2026-02-16 05:26:19.788501 | orchestrator | Monday 16 February 2026 05:26:16 +0000 (0:00:03.690) 0:00:34.690 *******
2026-02-16 05:26:19.788510 | orchestrator | changed: [testbed-node-0] => {
2026-02-16 05:26:19.788519 | orchestrator |  "msg": "Notifying handlers"
2026-02-16 05:26:19.788528 | orchestrator | }
2026-02-16 05:26:19.788537 | orchestrator | changed: [testbed-node-1] => {
2026-02-16 05:26:19.788545 | orchestrator |  "msg": "Notifying handlers"
2026-02-16 05:26:19.788554 | orchestrator | }
2026-02-16 05:26:19.788562 | orchestrator | changed: [testbed-node-2] => {
2026-02-16 05:26:19.788571 | orchestrator |  "msg": "Notifying handlers"
2026-02-16 05:26:19.788634 | orchestrator | }
2026-02-16 05:26:19.788645 | orchestrator |
2026-02-16 05:26:19.788656 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-16 05:26:19.788665 | orchestrator | Monday 16 February 2026 05:26:17 +0000 (0:00:01.375) 0:00:36.066 *******
2026-02-16 05:26:19.788684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:29:32.244104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-16 05:29:32.244236 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:29:32.244253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:29:32.244282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-16 05:29:32.244292 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:29:32.244317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True,
'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-16 05:29:32.244333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-16
05:29:32.244342 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:29:32.244350 | orchestrator |
2026-02-16 05:29:32.244360 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-16 05:29:32.244369 | orchestrator | Monday 16 February 2026 05:26:19 +0000 (0:00:02.061) 0:00:38.128 *******
2026-02-16 05:29:32.244388 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:29:32.244396 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:29:32.244404 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:29:32.244412 | orchestrator |
2026-02-16 05:29:32.244420 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-16 05:29:32.244429 | orchestrator | Monday 16 February 2026 05:26:21 +0000 (0:00:01.620) 0:00:39.749 *******
2026-02-16 05:29:32.244437 | orchestrator |
2026-02-16 05:29:32.244445 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-16 05:29:32.244453 | orchestrator | Monday 16 February 2026 05:26:21 +0000 (0:00:00.448) 0:00:40.197 *******
2026-02-16 05:29:32.244460 | orchestrator |
2026-02-16 05:29:32.244468 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-16 05:29:32.244476 | orchestrator | Monday 16 February 2026 05:26:22 +0000 (0:00:00.432) 0:00:40.630 *******
2026-02-16 05:29:32.244483 | orchestrator |
2026-02-16 05:29:32.244491 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-02-16 05:29:32.244499 | orchestrator | Monday 16 February 2026 05:26:23 +0000 (0:00:00.787) 0:00:41.418 *******
2026-02-16 05:29:32.244507 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:29:32.244515 | orchestrator |
2026-02-16 05:29:32.244523 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-02-16 05:29:32.244530 | orchestrator | Monday 16 February
2026 05:26:27 +0000 (0:00:03.942) 0:00:45.360 ******* 2026-02-16 05:29:32.244538 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:29:32.244572 | orchestrator | 2026-02-16 05:29:32.244581 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-16 05:29:32.244589 | orchestrator | Monday 16 February 2026 05:26:34 +0000 (0:00:07.854) 0:00:53.215 ******* 2026-02-16 05:29:32.244597 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:29:32.244604 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:29:32.244612 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:29:32.244620 | orchestrator | 2026-02-16 05:29:32.244629 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-16 05:29:32.244639 | orchestrator | Monday 16 February 2026 05:27:46 +0000 (0:01:11.892) 0:02:05.107 ******* 2026-02-16 05:29:32.244653 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:29:32.244666 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:29:32.244680 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:29:32.244693 | orchestrator | 2026-02-16 05:29:32.244707 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-16 05:29:32.244723 | orchestrator | Monday 16 February 2026 05:29:22 +0000 (0:01:35.418) 0:03:40.525 ******* 2026-02-16 05:29:32.244738 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:29:32.244752 | orchestrator | 2026-02-16 05:29:32.244761 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-16 05:29:32.244770 | orchestrator | Monday 16 February 2026 05:29:23 +0000 (0:00:01.810) 0:03:42.336 ******* 2026-02-16 05:29:32.244779 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:29:32.244788 | orchestrator | 2026-02-16 05:29:32.244797 | orchestrator | TASK 
[opensearch : Check if a log retention policy exists] ********************* 2026-02-16 05:29:32.244806 | orchestrator | Monday 16 February 2026 05:29:27 +0000 (0:00:03.350) 0:03:45.686 ******* 2026-02-16 05:29:32.244815 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:29:32.244824 | orchestrator | 2026-02-16 05:29:32.244833 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-16 05:29:32.244842 | orchestrator | Monday 16 February 2026 05:29:30 +0000 (0:00:03.640) 0:03:49.327 ******* 2026-02-16 05:29:32.244851 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:29:32.244860 | orchestrator | 2026-02-16 05:29:32.244869 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-16 05:29:32.244884 | orchestrator | Monday 16 February 2026 05:29:32 +0000 (0:00:01.250) 0:03:50.578 ******* 2026-02-16 05:29:34.984399 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:29:34.984504 | orchestrator | 2026-02-16 05:29:34.984520 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 05:29:34.984535 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-16 05:29:34.984593 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-16 05:29:34.984606 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-16 05:29:34.984617 | orchestrator | 2026-02-16 05:29:34.984629 | orchestrator | 2026-02-16 05:29:34.984640 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 05:29:34.984651 | orchestrator | Monday 16 February 2026 05:29:34 +0000 (0:00:02.323) 0:03:52.901 ******* 2026-02-16 05:29:34.984661 | orchestrator | =============================================================================== 
2026-02-16 05:29:34.984672 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 95.42s 2026-02-16 05:29:34.984683 | orchestrator | opensearch : Restart opensearch container ------------------------------ 71.89s 2026-02-16 05:29:34.984710 | orchestrator | opensearch : Perform a flush -------------------------------------------- 7.85s 2026-02-16 05:29:34.984721 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.94s 2026-02-16 05:29:34.984732 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.76s 2026-02-16 05:29:34.984742 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.69s 2026-02-16 05:29:34.984753 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.64s 2026-02-16 05:29:34.984763 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.57s 2026-02-16 05:29:34.984774 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.45s 2026-02-16 05:29:34.984784 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.36s 2026-02-16 05:29:34.984795 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.35s 2026-02-16 05:29:34.984805 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.33s 2026-02-16 05:29:34.984816 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.32s 2026-02-16 05:29:34.984827 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.07s 2026-02-16 05:29:34.984837 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.06s 2026-02-16 05:29:34.984848 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.02s 2026-02-16 
05:29:34.984859 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.97s 2026-02-16 05:29:34.984869 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.93s 2026-02-16 05:29:34.984880 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.90s 2026-02-16 05:29:34.984891 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.81s 2026-02-16 05:29:35.384621 | orchestrator | + osism apply -a upgrade memcached 2026-02-16 05:29:37.519415 | orchestrator | 2026-02-16 05:29:37 | INFO  | Task 87e35078-5806-410b-b772-4d05df8bb4bd (memcached) was prepared for execution. 2026-02-16 05:29:37.519536 | orchestrator | 2026-02-16 05:29:37 | INFO  | It takes a moment until task 87e35078-5806-410b-b772-4d05df8bb4bd (memcached) has been started and output is visible here. 2026-02-16 05:30:01.481214 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-16 05:30:01.481298 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-16 05:30:01.481333 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-16 05:30:01.481340 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-16 05:30:01.481354 | orchestrator | 2026-02-16 05:30:01.481362 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 05:30:01.481368 | orchestrator | 2026-02-16 05:30:01.481375 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 05:30:01.481382 | orchestrator | Monday 16 February 2026 05:29:43 +0000 (0:00:01.493) 0:00:01.493 ******* 2026-02-16 05:30:01.481388 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:30:01.481395 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:30:01.481401 | orchestrator | ok: [testbed-node-2] 
2026-02-16 05:30:01.481408 | orchestrator | 2026-02-16 05:30:01.481414 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 05:30:01.481420 | orchestrator | Monday 16 February 2026 05:29:44 +0000 (0:00:00.676) 0:00:02.170 ******* 2026-02-16 05:30:01.481426 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-16 05:30:01.481432 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-16 05:30:01.481439 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-16 05:30:01.481445 | orchestrator | 2026-02-16 05:30:01.481451 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-16 05:30:01.481457 | orchestrator | 2026-02-16 05:30:01.481463 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-16 05:30:01.481469 | orchestrator | Monday 16 February 2026 05:29:44 +0000 (0:00:00.834) 0:00:03.005 ******* 2026-02-16 05:30:01.481476 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:30:01.481482 | orchestrator | 2026-02-16 05:30:01.481488 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-02-16 05:30:01.481494 | orchestrator | Monday 16 February 2026 05:29:45 +0000 (0:00:00.976) 0:00:03.981 ******* 2026-02-16 05:30:01.481501 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-16 05:30:01.481507 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-16 05:30:01.481513 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-16 05:30:01.481519 | orchestrator | 2026-02-16 05:30:01.481526 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-16 05:30:01.481601 | orchestrator | Monday 16 February 2026 05:29:46 +0000 (0:00:00.918) 0:00:04.900 
******* 2026-02-16 05:30:01.481626 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-16 05:30:01.481639 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-16 05:30:01.481649 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-16 05:30:01.481659 | orchestrator | 2026-02-16 05:30:01.481668 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-02-16 05:30:01.481694 | orchestrator | Monday 16 February 2026 05:29:48 +0000 (0:00:01.738) 0:00:06.639 ******* 2026-02-16 05:30:01.481708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-16 05:30:01.481722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-16 05:30:01.481761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-16 05:30:01.481773 | orchestrator | 2026-02-16 05:30:01.481785 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-02-16 05:30:01.481796 | orchestrator | Monday 16 February 2026 05:29:49 +0000 (0:00:01.173) 0:00:07.813 ******* 2026-02-16 05:30:01.481807 | orchestrator | changed: [testbed-node-0] => { 2026-02-16 05:30:01.481818 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:30:01.481829 | orchestrator | } 2026-02-16 05:30:01.481838 | orchestrator | changed: [testbed-node-1] => { 2026-02-16 05:30:01.481846 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:30:01.481853 | orchestrator | } 2026-02-16 05:30:01.481860 | orchestrator | changed: [testbed-node-2] => { 2026-02-16 05:30:01.481867 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:30:01.481874 | orchestrator | } 2026-02-16 05:30:01.481882 | orchestrator | 2026-02-16 05:30:01.481890 | orchestrator | TASK [service-check-containers : Include tasks] 
******************************** 2026-02-16 05:30:01.481897 | orchestrator | Monday 16 February 2026 05:29:50 +0000 (0:00:00.356) 0:00:08.169 ******* 2026-02-16 05:30:01.481905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-16 05:30:01.481918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-16 05:30:01.481930 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-16 05:30:01.481936 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-16 
05:30:01.481948 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:30:01.481955 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:30:01.481961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-16 05:30:01.481968 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:30:01.481974 | orchestrator | 2026-02-16 05:30:01.481980 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-16 05:30:01.481986 | orchestrator | Monday 16 February 2026 05:29:51 +0000 (0:00:01.196) 0:00:09.365 ******* 2026-02-16 05:30:01.481992 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:30:01.481999 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:30:01.482009 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:30:01.832802 | orchestrator | 2026-02-16 05:30:01.832896 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 05:30:01.832910 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-16 05:30:01.832921 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-16 05:30:01.832930 | orchestrator | testbed-node-2 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-16 05:30:01.832938 | orchestrator | 2026-02-16 05:30:01.832947 | orchestrator | 2026-02-16 05:30:01.832956 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 05:30:01.832965 | orchestrator | Monday 16 February 2026 05:30:01 +0000 (0:00:10.204) 0:00:19.570 ******* 2026-02-16 05:30:01.832974 | orchestrator | =============================================================================== 2026-02-16 05:30:01.832982 | orchestrator | memcached : Restart memcached container -------------------------------- 10.20s 2026-02-16 05:30:01.832991 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.74s 2026-02-16 05:30:01.833002 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.20s 2026-02-16 05:30:01.833015 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.17s 2026-02-16 05:30:01.833030 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.98s 2026-02-16 05:30:01.833044 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.92s 2026-02-16 05:30:01.833058 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.83s 2026-02-16 05:30:01.833073 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.68s 2026-02-16 05:30:01.833085 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.36s 2026-02-16 05:30:02.159468 | orchestrator | + osism apply -a upgrade redis 2026-02-16 05:30:04.206341 | orchestrator | 2026-02-16 05:30:04 | INFO  | Task f4c05718-3ceb-4a8a-afd1-b6e704c8f87a (redis) was prepared for execution. 
2026-02-16 05:30:04.206434 | orchestrator | 2026-02-16 05:30:04 | INFO  | It takes a moment until task f4c05718-3ceb-4a8a-afd1-b6e704c8f87a (redis) has been started and output is visible here. 2026-02-16 05:30:22.114364 | orchestrator | 2026-02-16 05:30:22.114447 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 05:30:22.114455 | orchestrator | 2026-02-16 05:30:22.114460 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 05:30:22.114464 | orchestrator | Monday 16 February 2026 05:30:09 +0000 (0:00:01.349) 0:00:01.349 ******* 2026-02-16 05:30:22.114469 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:30:22.114473 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:30:22.114477 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:30:22.114481 | orchestrator | 2026-02-16 05:30:22.114497 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 05:30:22.114501 | orchestrator | Monday 16 February 2026 05:30:12 +0000 (0:00:02.176) 0:00:03.525 ******* 2026-02-16 05:30:22.114505 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-16 05:30:22.114510 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-16 05:30:22.114513 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-16 05:30:22.114517 | orchestrator | 2026-02-16 05:30:22.114521 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-16 05:30:22.114571 | orchestrator | 2026-02-16 05:30:22.114578 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-16 05:30:22.114584 | orchestrator | Monday 16 February 2026 05:30:14 +0000 (0:00:02.111) 0:00:05.636 ******* 2026-02-16 05:30:22.114591 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-16 05:30:22.114598 | orchestrator | 2026-02-16 05:30:22.114605 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-16 05:30:22.114611 | orchestrator | Monday 16 February 2026 05:30:16 +0000 (0:00:02.614) 0:00:08.251 ******* 2026-02-16 05:30:22.114618 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 05:30:22.114626 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 05:30:22.114630 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 05:30:22.114635 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 05:30:22.114665 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 05:30:22.114674 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 05:30:22.114678 | orchestrator | 2026-02-16 05:30:22.114682 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-16 05:30:22.114686 | orchestrator | Monday 16 February 2026 05:30:18 +0000 (0:00:02.157) 0:00:10.408 ******* 2026-02-16 05:30:22.114690 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 05:30:22.114694 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 05:30:22.114697 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 05:30:22.114705 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 05:30:22.114713 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 05:30:29.291462 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 05:30:29.291650 | orchestrator | 2026-02-16 05:30:29.291673 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-16 05:30:29.291687 | orchestrator | Monday 16 February 2026 05:30:22 +0000 (0:00:03.170) 0:00:13.579 ******* 2026-02-16 05:30:29.291700 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 05:30:29.291715 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 05:30:29.291726 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 05:30:29.291763 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 05:30:29.291776 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 05:30:29.291808 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 05:30:29.291820 | orchestrator | 2026-02-16 05:30:29.291831 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-02-16 05:30:29.291843 | orchestrator | Monday 16 February 2026 05:30:26 +0000 (0:00:04.081) 0:00:17.660 ******* 2026-02-16 05:30:29.291854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 05:30:29.291866 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 05:30:29.291916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-16 05:30:29.291952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 05:30:29.291976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': 
{'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 05:30:29.292016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-16 05:30:56.648650 | orchestrator | 2026-02-16 05:30:56.648760 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-02-16 05:30:56.648777 | orchestrator | Monday 16 February 2026 05:30:29 +0000 (0:00:03.105) 0:00:20.766 ******* 2026-02-16 05:30:56.648790 | orchestrator | changed: [testbed-node-0] => { 2026-02-16 05:30:56.648802 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:30:56.648814 | orchestrator | } 2026-02-16 05:30:56.648825 | orchestrator | changed: [testbed-node-1] => { 2026-02-16 05:30:56.648836 | orchestrator |  "msg": 
"Notifying handlers" 2026-02-16 05:30:56.648847 | orchestrator | } 2026-02-16 05:30:56.648858 | orchestrator | changed: [testbed-node-2] => { 2026-02-16 05:30:56.648868 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:30:56.648879 | orchestrator | } 2026-02-16 05:30:56.648890 | orchestrator | 2026-02-16 05:30:56.648901 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-16 05:30:56.648912 | orchestrator | Monday 16 February 2026 05:30:30 +0000 (0:00:01.596) 0:00:22.362 ******* 2026-02-16 05:30:56.648925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-16 05:30:56.648962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-16 05:30:56.648975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-16 05:30:56.648987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-16 05:30:56.648999 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:30:56.649011 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:30:56.649036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 
'timeout': '30'}}})  2026-02-16 05:30:56.649067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-16 05:30:56.649079 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:30:56.649090 | orchestrator | 2026-02-16 05:30:56.649101 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-16 05:30:56.649128 | orchestrator | Monday 16 February 2026 05:30:32 +0000 (0:00:01.912) 0:00:24.275 ******* 2026-02-16 05:30:56.649161 | orchestrator | 2026-02-16 05:30:56.649175 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-16 05:30:56.649187 | orchestrator | Monday 16 February 2026 05:30:33 +0000 (0:00:00.466) 0:00:24.742 ******* 2026-02-16 05:30:56.649199 | orchestrator | 2026-02-16 05:30:56.649232 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-16 05:30:56.649245 | orchestrator | Monday 16 February 2026 05:30:33 +0000 (0:00:00.435) 0:00:25.178 ******* 2026-02-16 05:30:56.649258 | orchestrator | 2026-02-16 05:30:56.649270 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-16 05:30:56.649282 | orchestrator | Monday 16 February 2026 05:30:34 +0000 (0:00:00.791) 0:00:25.969 ******* 2026-02-16 
05:30:56.649294 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:30:56.649306 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:30:56.649318 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:30:56.649331 | orchestrator | 2026-02-16 05:30:56.649343 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-16 05:30:56.649355 | orchestrator | Monday 16 February 2026 05:30:45 +0000 (0:00:10.906) 0:00:36.875 ******* 2026-02-16 05:30:56.649368 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:30:56.649379 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:30:56.649391 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:30:56.649403 | orchestrator | 2026-02-16 05:30:56.649415 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 05:30:56.649429 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-16 05:30:56.649444 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-16 05:30:56.649456 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-16 05:30:56.649469 | orchestrator | 2026-02-16 05:30:56.649482 | orchestrator | 2026-02-16 05:30:56.649495 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 05:30:56.649507 | orchestrator | Monday 16 February 2026 05:30:56 +0000 (0:00:10.850) 0:00:47.726 ******* 2026-02-16 05:30:56.649554 | orchestrator | =============================================================================== 2026-02-16 05:30:56.649565 | orchestrator | redis : Restart redis container ---------------------------------------- 10.91s 2026-02-16 05:30:56.649576 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.85s 2026-02-16 05:30:56.649587 | 
orchestrator | redis : Copying over redis config files --------------------------------- 4.08s 2026-02-16 05:30:56.649598 | orchestrator | redis : Copying over default config.json files -------------------------- 3.17s 2026-02-16 05:30:56.649608 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.11s 2026-02-16 05:30:56.649619 | orchestrator | redis : include_tasks --------------------------------------------------- 2.61s 2026-02-16 05:30:56.649629 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.18s 2026-02-16 05:30:56.649640 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.16s 2026-02-16 05:30:56.649650 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.11s 2026-02-16 05:30:56.649661 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.91s 2026-02-16 05:30:56.649672 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.69s 2026-02-16 05:30:56.649682 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.60s 2026-02-16 05:30:56.937363 | orchestrator | + osism apply -a upgrade mariadb 2026-02-16 05:30:58.983286 | orchestrator | 2026-02-16 05:30:58 | INFO  | Task 51123109-b776-4b7c-b37a-3660d170aa5e (mariadb) was prepared for execution. 2026-02-16 05:30:58.983375 | orchestrator | 2026-02-16 05:30:58 | INFO  | It takes a moment until task 51123109-b776-4b7c-b37a-3660d170aa5e (mariadb) has been started and output is visible here. 
2026-02-16 05:31:23.527032 | orchestrator | 2026-02-16 05:31:23.527205 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 05:31:23.527236 | orchestrator | 2026-02-16 05:31:23.527279 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 05:31:23.527312 | orchestrator | Monday 16 February 2026 05:31:04 +0000 (0:00:01.550) 0:00:01.550 ******* 2026-02-16 05:31:23.527334 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:31:23.527353 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:31:23.527372 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:31:23.527391 | orchestrator | 2026-02-16 05:31:23.527410 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 05:31:23.527429 | orchestrator | Monday 16 February 2026 05:31:06 +0000 (0:00:01.899) 0:00:03.450 ******* 2026-02-16 05:31:23.527449 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-16 05:31:23.527468 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-16 05:31:23.527486 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-16 05:31:23.527533 | orchestrator | 2026-02-16 05:31:23.527553 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-16 05:31:23.527572 | orchestrator | 2026-02-16 05:31:23.527592 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-16 05:31:23.527610 | orchestrator | Monday 16 February 2026 05:31:08 +0000 (0:00:01.713) 0:00:05.164 ******* 2026-02-16 05:31:23.527629 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 05:31:23.527648 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-16 05:31:23.527666 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-16 05:31:23.527685 | orchestrator | 
2026-02-16 05:31:23.527705 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-16 05:31:23.527724 | orchestrator | Monday 16 February 2026 05:31:09 +0000 (0:00:01.453) 0:00:06.617 ******* 2026-02-16 05:31:23.527744 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:31:23.527763 | orchestrator | 2026-02-16 05:31:23.527781 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-16 05:31:23.527801 | orchestrator | Monday 16 February 2026 05:31:11 +0000 (0:00:01.892) 0:00:08.509 ******* 2026-02-16 05:31:23.527828 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 05:31:23.527922 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 05:31:23.527951 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 05:31:23.527971 | orchestrator | 2026-02-16 05:31:23.527990 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-16 05:31:23.528022 | orchestrator | Monday 16 February 2026 05:31:15 +0000 (0:00:03.658) 0:00:12.167 ******* 2026-02-16 05:31:23.528043 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:31:23.528063 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:31:23.528081 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:31:23.528100 | orchestrator | 2026-02-16 05:31:23.528118 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-16 05:31:23.528137 | orchestrator | Monday 16 February 2026 05:31:16 +0000 (0:00:01.572) 0:00:13.740 ******* 2026-02-16 05:31:23.528157 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:31:23.528176 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:31:23.528194 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:31:23.528214 | orchestrator | 2026-02-16 05:31:23.528231 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-16 05:31:23.528250 | orchestrator | Monday 16 February 2026 05:31:19 +0000 (0:00:02.124) 0:00:15.864 ******* 2026-02-16 05:31:23.528293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 05:31:36.091549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 05:31:36.091681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 05:31:36.091692 | orchestrator | 2026-02-16 05:31:36.091700 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-16 05:31:36.091709 | orchestrator | Monday 16 February 2026 05:31:23 +0000 (0:00:04.410) 0:00:20.275 ******* 2026-02-16 05:31:36.091716 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:31:36.091724 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:31:36.091731 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:31:36.091739 | orchestrator | 2026-02-16 05:31:36.091746 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-16 05:31:36.091764 | orchestrator | Monday 16 February 2026 05:31:25 +0000 (0:00:02.059) 0:00:22.335 
******* 2026-02-16 05:31:36.091771 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:31:36.091778 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:31:36.091785 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:31:36.091792 | orchestrator | 2026-02-16 05:31:36.091799 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-16 05:31:36.091806 | orchestrator | Monday 16 February 2026 05:31:30 +0000 (0:00:05.089) 0:00:27.424 ******* 2026-02-16 05:31:36.091814 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:31:36.091821 | orchestrator | 2026-02-16 05:31:36.091828 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-16 05:31:36.091835 | orchestrator | Monday 16 February 2026 05:31:32 +0000 (0:00:01.937) 0:00:29.361 ******* 2026-02-16 05:31:36.091857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 05:31:36.091865 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:31:36.091880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 05:31:42.744443 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:31:42.744624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 05:31:42.744669 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:31:42.744684 | orchestrator | 2026-02-16 05:31:42.744697 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-16 05:31:42.744709 | orchestrator | Monday 16 February 2026 05:31:36 +0000 (0:00:03.481) 0:00:32.842 ******* 2026-02-16 05:31:42.744737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 05:31:42.744750 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:31:42.744782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 05:31:42.744804 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:31:42.744821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 05:31:42.744833 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:31:42.744844 | orchestrator | 2026-02-16 05:31:42.744856 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-16 05:31:42.744867 | orchestrator | Monday 16 February 2026 05:31:39 +0000 (0:00:02.956) 0:00:35.799 ******* 2026-02-16 05:31:42.744889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 05:31:46.898752 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:31:46.898884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': 
{'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 05:31:46.898915 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:31:46.898936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 05:31:46.898975 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:31:46.898986 | orchestrator | 2026-02-16 05:31:46.898998 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-02-16 05:31:46.899009 | orchestrator | Monday 16 February 2026 05:31:42 +0000 (0:00:03.696) 0:00:39.496 ******* 2026-02-16 05:31:46.899044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 05:31:46.899058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 05:31:46.899091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-16 05:32:00.948197 | orchestrator | 2026-02-16 05:32:00.948346 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-02-16 05:32:00.948376 | orchestrator | Monday 16 February 2026 05:31:46 +0000 (0:00:04.154) 0:00:43.651 ******* 2026-02-16 05:32:00.948396 | orchestrator | changed: [testbed-node-0] => { 2026-02-16 05:32:00.948415 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:32:00.948434 | orchestrator | } 2026-02-16 05:32:00.948555 | orchestrator | changed: [testbed-node-1] => { 2026-02-16 05:32:00.948571 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:32:00.948582 | orchestrator | } 2026-02-16 05:32:00.948593 | orchestrator | changed: [testbed-node-2] => { 2026-02-16 05:32:00.948604 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:32:00.948641 | orchestrator | } 2026-02-16 05:32:00.948653 | orchestrator | 2026-02-16 05:32:00.948665 | orchestrator | TASK [service-check-containers : Include 
tasks] ******************************** 2026-02-16 05:32:00.948676 | orchestrator | Monday 16 February 2026 05:31:48 +0000 (0:00:01.410) 0:00:45.061 ******* 2026-02-16 05:32:00.948693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2026-02-16 05:32:00.948708 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:32:00.948762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 
05:32:00.948806 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:32:00.948833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-16 05:32:00.948853 | orchestrator | skipping: 
[testbed-node-2] 2026-02-16 05:32:00.948872 | orchestrator | 2026-02-16 05:32:00.948891 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-02-16 05:32:00.948909 | orchestrator | Monday 16 February 2026 05:31:51 +0000 (0:00:03.603) 0:00:48.665 ******* 2026-02-16 05:32:00.948928 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:32:00.948948 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:32:00.948967 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:32:00.948989 | orchestrator | 2026-02-16 05:32:00.949012 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-02-16 05:32:00.949032 | orchestrator | Monday 16 February 2026 05:31:53 +0000 (0:00:01.261) 0:00:49.927 ******* 2026-02-16 05:32:00.949047 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:32:00.949058 | orchestrator | 2026-02-16 05:32:00.949069 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-02-16 05:32:00.949080 | orchestrator | Monday 16 February 2026 05:31:54 +0000 (0:00:01.086) 0:00:51.014 ******* 2026-02-16 05:32:00.949090 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:32:00.949101 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:32:00.949112 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:32:00.949122 | orchestrator | 2026-02-16 05:32:00.949133 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-02-16 05:32:00.949144 | orchestrator | Monday 16 February 2026 05:31:55 +0000 (0:00:01.318) 0:00:52.332 ******* 2026-02-16 05:32:00.949158 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:32:00.949175 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:32:00.949193 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:32:00.949211 | orchestrator | 2026-02-16 05:32:00.949229 | orchestrator | TASK [mariadb : Copying MariaDB log file 
to /tmp] ******************************
2026-02-16 05:32:00.949246 | orchestrator | Monday 16 February 2026 05:31:56 +0000 (0:00:01.418) 0:00:53.751 *******
2026-02-16 05:32:00.949258 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:00.949269 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:00.949289 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:00.949300 | orchestrator |
2026-02-16 05:32:00.949311 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-02-16 05:32:00.949322 | orchestrator | Monday 16 February 2026 05:31:58 +0000 (0:00:01.312) 0:00:55.063 *******
2026-02-16 05:32:00.949332 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:00.949343 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:00.949354 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:00.949364 | orchestrator |
2026-02-16 05:32:00.949375 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-02-16 05:32:00.949393 | orchestrator | Monday 16 February 2026 05:31:59 +0000 (0:00:01.295) 0:00:56.359 *******
2026-02-16 05:32:00.949404 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:00.949415 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:00.949426 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:00.949437 | orchestrator |
2026-02-16 05:32:00.949488 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] ****************************
2026-02-16 05:32:18.835860 | orchestrator | Monday 16 February 2026 05:32:00 +0000 (0:00:01.337) 0:00:57.696 *******
2026-02-16 05:32:18.835975 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:18.835991 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:18.835997 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:18.836003 | orchestrator |
2026-02-16 05:32:18.836010 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ********************
2026-02-16 05:32:18.836016 | orchestrator | Monday 16 February 2026 05:32:02 +0000 (0:00:01.576) 0:00:59.272 *******
2026-02-16 05:32:18.836022 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-16 05:32:18.836028 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-16 05:32:18.836034 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-16 05:32:18.836039 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:18.836045 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-16 05:32:18.836050 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-16 05:32:18.836056 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-16 05:32:18.836061 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:18.836066 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-16 05:32:18.836072 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-16 05:32:18.836077 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-16 05:32:18.836082 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:18.836088 | orchestrator |
2026-02-16 05:32:18.836094 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] ***
2026-02-16 05:32:18.836099 | orchestrator | Monday 16 February 2026 05:32:03 +0000 (0:00:01.397) 0:01:00.670 *******
2026-02-16 05:32:18.836104 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:18.836110 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:18.836115 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:18.836120 | orchestrator |
2026-02-16 05:32:18.836126 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] *****
2026-02-16 05:32:18.836131 | orchestrator | Monday 16 February 2026 05:32:05 +0000 (0:00:01.416) 0:01:02.087 *******
2026-02-16 05:32:18.836137 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:18.836142 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:18.836147 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:18.836153 | orchestrator |
2026-02-16 05:32:18.836158 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] ***************
2026-02-16 05:32:18.836163 | orchestrator | Monday 16 February 2026 05:32:06 +0000 (0:00:01.520) 0:01:03.608 *******
2026-02-16 05:32:18.836169 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:18.836174 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:18.836179 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:18.836185 | orchestrator |
2026-02-16 05:32:18.836209 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] ***
2026-02-16 05:32:18.836215 | orchestrator | Monday 16 February 2026 05:32:08 +0000 (0:00:01.362) 0:01:04.970 *******
2026-02-16 05:32:18.836221 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:18.836226 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:18.836231 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:18.836237 | orchestrator |
2026-02-16 05:32:18.836242 | orchestrator | TASK [mariadb : Starting first MariaDB container] ******************************
2026-02-16 05:32:18.836248 | orchestrator | Monday 16 February 2026 05:32:09 +0000 (0:00:01.339) 0:01:06.310 *******
2026-02-16 05:32:18.836253 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:18.836258 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:18.836263 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:18.836269 | orchestrator |
2026-02-16 05:32:18.836274 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ******************************
2026-02-16 05:32:18.836279 | orchestrator | Monday 16 February 2026 05:32:10 +0000 (0:00:01.326) 0:01:07.636 *******
2026-02-16 05:32:18.836285 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:18.836290 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:18.836295 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:18.836301 | orchestrator |
2026-02-16 05:32:18.836306 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************
2026-02-16 05:32:18.836311 | orchestrator | Monday 16 February 2026 05:32:12 +0000 (0:00:01.633) 0:01:09.269 *******
2026-02-16 05:32:18.836317 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:18.836322 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:18.836327 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:18.836332 | orchestrator |
2026-02-16 05:32:18.836338 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************
2026-02-16 05:32:18.836343 | orchestrator | Monday 16 February 2026 05:32:13 +0000 (0:00:01.355) 0:01:10.625 *******
2026-02-16 05:32:18.836348 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:18.836354 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:18.836359 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:18.836364 | orchestrator |
2026-02-16 05:32:18.836370 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] ****************************
2026-02-16 05:32:18.836375 | orchestrator | Monday 16 February 2026 05:32:15 +0000 (0:00:01.424) 0:01:12.050 *******
2026-02-16 05:32:18.836411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-16 05:32:18.836426 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:18.836455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-16 05:32:18.836462 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:18.836479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-16 05:32:35.340906 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:35.341069 | orchestrator |
2026-02-16 05:32:35.341101 | orchestrator | TASK [mariadb : Wait for slave MariaDB] ****************************************
2026-02-16 05:32:35.341130 | orchestrator | Monday 16 February 2026 05:32:18 +0000 (0:00:03.531) 0:01:15.581 *******
2026-02-16 05:32:35.341156 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:35.341180 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:35.341206 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:35.341225 | orchestrator |
2026-02-16 05:32:35.341243 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] ***************************
2026-02-16 05:32:35.341260 | orchestrator | Monday 16 February 2026 05:32:20 +0000 (0:00:01.560) 0:01:17.141 *******
2026-02-16 05:32:35.341285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-16 05:32:35.341309 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:35.341370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-16 05:32:35.341410 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:35.341493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-16 05:32:35.341517 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:35.341536 | orchestrator |
2026-02-16 05:32:35.341550 | orchestrator | TASK [mariadb : Wait for master mariadb] ***************************************
2026-02-16 05:32:35.341563 | orchestrator | Monday 16 February 2026 05:32:23 +0000 (0:00:03.365) 0:01:20.507 *******
2026-02-16 05:32:35.341575 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:35.341587 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:35.341599 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:35.341612 | orchestrator |
2026-02-16 05:32:35.341625 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-02-16 05:32:35.341638 | orchestrator | Monday 16 February 2026 05:32:25 +0000 (0:00:01.709) 0:01:22.216 *******
2026-02-16 05:32:35.341650 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:35.341662 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:35.341675 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:35.341688 | orchestrator |
2026-02-16 05:32:35.341700 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-02-16 05:32:35.341713 | orchestrator | Monday 16 February 2026 05:32:26 +0000 (0:00:01.328) 0:01:23.544 *******
2026-02-16 05:32:35.341726 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:35.341738 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:35.341750 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:35.341762 | orchestrator |
2026-02-16 05:32:35.341775 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-02-16 05:32:35.341788 | orchestrator | Monday 16 February 2026 05:32:28 +0000 (0:00:01.380) 0:01:24.925 *******
2026-02-16 05:32:35.341815 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:35.341826 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:35.341837 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:35.341848 | orchestrator |
2026-02-16 05:32:35.341859 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-16 05:32:35.341870 | orchestrator | Monday 16 February 2026 05:32:29 +0000 (0:00:01.665) 0:01:26.590 *******
2026-02-16 05:32:35.341880 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:32:35.341891 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:32:35.341901 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:32:35.341912 | orchestrator |
2026-02-16 05:32:35.341922 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-02-16 05:32:35.341933 | orchestrator | Monday 16 February 2026 05:32:31 +0000 (0:00:02.003) 0:01:28.594 *******
2026-02-16 05:32:35.341944 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:32:35.341956 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:32:35.341967 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:32:35.341977 | orchestrator |
2026-02-16 05:32:35.341988 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-02-16 05:32:35.341999 | orchestrator | Monday 16 February 2026 05:32:33 +0000 (0:00:01.931) 0:01:30.525 *******
2026-02-16 05:32:35.342009 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:32:35.342089 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:32:35.342104 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:32:35.342115 | orchestrator |
2026-02-16 05:32:35.342125 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-02-16 05:32:35.342136 | orchestrator | Monday 16 February 2026 05:32:35 +0000 (0:00:01.388) 0:01:31.913 *******
2026-02-16 05:32:35.342158 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:35:12.555275 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:35:12.555424 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:35:12.555443 | orchestrator |
2026-02-16 05:35:12.555456 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-02-16 05:35:12.555469 | orchestrator | Monday 16 February 2026 05:32:36 +0000 (0:00:01.299) 0:01:33.212 *******
2026-02-16 05:35:12.555481 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:35:12.555494 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:35:12.555506 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:35:12.555517 | orchestrator |
2026-02-16 05:35:12.555524 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-02-16 05:35:12.555531 | orchestrator | Monday 16 February 2026 05:32:38 +0000 (0:00:01.876) 0:01:35.089 *******
2026-02-16 05:35:12.555537 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:35:12.555543 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:35:12.555550 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:35:12.555556 | orchestrator |
2026-02-16 05:35:12.555562 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-02-16 05:35:12.555569 | orchestrator | Monday 16 February 2026 05:32:39 +0000 (0:00:01.283) 0:01:36.372 *******
2026-02-16 05:35:12.555575 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:35:12.555582 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:35:12.555589 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:35:12.555595 | orchestrator |
2026-02-16 05:35:12.555601 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-02-16 05:35:12.555607 | orchestrator | Monday 16 February 2026 05:32:40 +0000 (0:00:01.357) 0:01:37.730 *******
2026-02-16 05:35:12.555613 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:35:12.555619 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:35:12.555625 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:35:12.555631 | orchestrator |
2026-02-16 05:35:12.555637 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-02-16 05:35:12.555643 | orchestrator | Monday 16 February 2026 05:32:44 +0000 (0:00:03.792) 0:01:41.522 *******
2026-02-16 05:35:12.555649 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:35:12.555655 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:35:12.555683 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:35:12.555689 | orchestrator |
2026-02-16 05:35:12.555696 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-02-16 05:35:12.555702 | orchestrator | Monday 16 February 2026 05:32:46 +0000 (0:00:01.431) 0:01:42.954 *******
2026-02-16 05:35:12.555720 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:35:12.555727 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:35:12.555733 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:35:12.555739 | orchestrator |
2026-02-16 05:35:12.555745 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-02-16 05:35:12.555752 | orchestrator | Monday 16 February 2026 05:32:47 +0000 (0:00:01.344) 0:01:44.299 *******
2026-02-16 05:35:12.555758 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:35:12.555765 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:35:12.555771 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:35:12.555777 | orchestrator |
2026-02-16 05:35:12.555783 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-16 05:35:12.555789 | orchestrator | Monday 16 February 2026 05:32:49 +0000 (0:00:01.567) 0:01:45.982 *******
2026-02-16 05:35:12.555795 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:35:12.555801 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:35:12.555808 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:35:12.555815 | orchestrator |
2026-02-16 05:35:12.555822 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-16 05:35:12.555829 | orchestrator | Monday 16 February 2026 05:32:50 +0000 (0:00:01.567) 0:01:47.550 *******
2026-02-16 05:35:12.555836 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:35:12.555843 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:35:12.555850 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:35:12.555857 | orchestrator |
2026-02-16 05:35:12.555864 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-02-16 05:35:12.555871 | orchestrator | Monday 16 February 2026 05:32:52 +0000 (0:00:01.532) 0:01:49.082 *******
2026-02-16 05:35:12.555878 | orchestrator | changed: [testbed-node-0]
2026-02-16 05:35:12.555884 | orchestrator | changed: [testbed-node-1]
2026-02-16 05:35:12.555891 | orchestrator | changed: [testbed-node-2]
2026-02-16 05:35:12.555898 | orchestrator |
2026-02-16 05:35:12.555906 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-02-16 05:35:12.555912 | orchestrator | Monday 16 February 2026 05:32:53 +0000 (0:00:01.567) 0:01:50.649 *******
2026-02-16 05:35:12.555919 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:35:12.555926 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:35:12.555945 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:35:12.555952 | orchestrator |
2026-02-16 05:35:12.555958 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-16 05:35:12.555965 | orchestrator |
2026-02-16 05:35:12.555973 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-16 05:35:12.555980 | orchestrator | Monday 16 February 2026 05:32:55 +0000 (0:00:02.001) 0:01:52.651 *******
2026-02-16 05:35:12.555987 | orchestrator | changed: [testbed-node-0]
2026-02-16 05:35:12.555994 | orchestrator |
2026-02-16 05:35:12.556001 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-16 05:35:12.556008 | orchestrator | Monday 16 February 2026 05:33:22 +0000 (0:00:26.791) 0:02:19.443 *******
2026-02-16 05:35:12.556015 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:35:12.556022 | orchestrator |
2026-02-16 05:35:12.556029 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-16 05:35:12.556036 | orchestrator | Monday 16 February 2026 05:33:28 +0000 (0:00:05.658) 0:02:25.101 *******
2026-02-16 05:35:12.556043 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:35:12.556050 | orchestrator |
2026-02-16 05:35:12.556056 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-16 05:35:12.556062 | orchestrator |
2026-02-16 05:35:12.556069 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-16 05:35:12.556080 | orchestrator | Monday 16 February 2026 05:33:31 +0000 (0:00:03.091) 0:02:28.193 *******
2026-02-16 05:35:12.556086 | orchestrator | changed: [testbed-node-1]
2026-02-16 05:35:12.556092 | orchestrator |
2026-02-16 05:35:12.556099 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-16 05:35:12.556121 | orchestrator | Monday 16 February 2026 05:33:57 +0000 (0:00:26.197) 0:02:54.390 *******
2026-02-16 05:35:12.556127 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left).
2026-02-16 05:35:12.556134 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:35:12.556140 | orchestrator |
2026-02-16 05:35:12.556147 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-16 05:35:12.556153 | orchestrator | Monday 16 February 2026 05:34:05 +0000 (0:00:08.173) 0:03:02.564 *******
2026-02-16 05:35:12.556159 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:35:12.556165 | orchestrator |
2026-02-16 05:35:12.556171 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-16 05:35:12.556177 | orchestrator |
2026-02-16 05:35:12.556183 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-16 05:35:12.556189 | orchestrator | Monday 16 February 2026 05:34:08 +0000 (0:00:02.999) 0:03:05.563 *******
2026-02-16 05:35:12.556195 | orchestrator | changed: [testbed-node-2]
2026-02-16 05:35:12.556201 | orchestrator |
2026-02-16 05:35:12.556207 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-16 05:35:12.556213 | orchestrator | Monday 16 February 2026 05:34:33 +0000 (0:00:25.083) 0:03:30.647 *******
2026-02-16 05:35:12.556219 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:35:12.556225 | orchestrator |
2026-02-16 05:35:12.556231 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-16 05:35:12.556237 | orchestrator | Monday 16 February 2026 05:34:39 +0000 (0:00:05.268) 0:03:35.915 *******
2026-02-16 05:35:12.556243 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-02-16 05:35:12.556249 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-16 05:35:12.556255 | orchestrator | mariadb_bootstrap_restart
2026-02-16 05:35:12.556261 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:35:12.556267 | orchestrator |
2026-02-16 05:35:12.556273 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-16 05:35:12.556279 | orchestrator | skipping: no hosts matched
2026-02-16 05:35:12.556285 | orchestrator |
2026-02-16 05:35:12.556291 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-16 05:35:12.556297 | orchestrator | skipping: no hosts matched
2026-02-16 05:35:12.556303 | orchestrator |
2026-02-16 05:35:12.556335 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-16 05:35:12.556343 | orchestrator |
2026-02-16 05:35:12.556349 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-16 05:35:12.556355 | orchestrator | Monday 16 February 2026 05:34:43 +0000 (0:00:04.370) 0:03:40.285 *******
2026-02-16 05:35:12.556361 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 05:35:12.556367 | orchestrator |
2026-02-16 05:35:12.556373 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-02-16 05:35:12.556379 | orchestrator | Monday 16 February 2026 05:34:45 +0000 (0:00:01.838) 0:03:42.124 *******
2026-02-16 05:35:12.556386 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:35:12.556392 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:35:12.556398 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:35:12.556407 | orchestrator |
2026-02-16 05:35:12.556418 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-02-16 05:35:12.556428 | orchestrator | Monday 16 February 2026 05:34:48 +0000 (0:00:03.163) 0:03:45.288 *******
2026-02-16 05:35:12.556438 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:35:12.556447 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:35:12.556457 | orchestrator | changed: [testbed-node-0]
2026-02-16 05:35:12.556476 | orchestrator |
2026-02-16 05:35:12.556486 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-02-16 05:35:12.556497 | orchestrator | Monday 16 February 2026 05:34:51 +0000 (0:00:03.192) 0:03:48.481 *******
2026-02-16 05:35:12.556503 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:35:12.556509 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:35:12.556515 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:35:12.556521 | orchestrator |
2026-02-16 05:35:12.556527 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-02-16 05:35:12.556534 | orchestrator | Monday 16 February 2026 05:34:54 +0000 (0:00:03.160) 0:03:51.642 *******
2026-02-16 05:35:12.556539 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:35:12.556546 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:35:12.556552 | orchestrator | changed: [testbed-node-0]
2026-02-16 05:35:12.556558 | orchestrator |
2026-02-16 05:35:12.556564 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-02-16 05:35:12.556574 | orchestrator | Monday 16 February 2026 05:34:58 +0000 (0:00:03.509) 0:03:55.151 *******
2026-02-16 05:35:12.556580 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:35:12.556586 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:35:12.556593 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:35:12.556599 | orchestrator |
2026-02-16 05:35:12.556605 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-02-16 05:35:12.556611 | orchestrator | Monday 16 February 2026 05:35:04 +0000 (0:00:06.211) 0:04:01.363 *******
2026-02-16 05:35:12.556617 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:35:12.556623 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:35:12.556629 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:35:12.556635 | orchestrator |
2026-02-16 05:35:12.556641 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-02-16 05:35:12.556647 | orchestrator | Monday 16 February 2026 05:35:07 +0000 (0:00:03.057) 0:04:04.420 *******
2026-02-16 05:35:12.556653 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:35:12.556659 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:35:12.556665 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:35:12.556672 | orchestrator |
2026-02-16 05:35:12.556678 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-16 05:35:12.556684 | orchestrator | Monday 16 February 2026 05:35:09 +0000 (0:00:01.372) 0:04:05.793 *******
2026-02-16 05:35:12.556690 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:35:12.556696 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:35:12.556702 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:35:12.556708 | orchestrator |
2026-02-16 05:35:12.556714 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-16 05:35:12.556725 | orchestrator | Monday 16 February 2026 05:35:12 +0000 (0:00:03.510) 0:04:09.304 *******
2026-02-16 05:35:34.128471 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 05:35:34.128637 | orchestrator |
2026-02-16 05:35:34.128657 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ******************************
2026-02-16 05:35:34.128669 | orchestrator | Monday 16 February 2026 05:35:14 +0000 (0:00:02.013) 0:04:11.317 *******
2026-02-16 05:35:34.128679 | orchestrator | changed: [testbed-node-0]
2026-02-16 05:35:34.128690 | orchestrator | changed: [testbed-node-1]
2026-02-16 05:35:34.128700 | orchestrator | changed: [testbed-node-2]
2026-02-16 05:35:34.128710 | orchestrator |
2026-02-16 05:35:34.128720 | orchestrator | PLAY RECAP
********************************************************************* 2026-02-16 05:35:34.128731 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-16 05:35:34.128743 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-16 05:35:34.128752 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-16 05:35:34.128787 | orchestrator | 2026-02-16 05:35:34.128798 | orchestrator | 2026-02-16 05:35:34.128808 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 05:35:34.128817 | orchestrator | Monday 16 February 2026 05:35:33 +0000 (0:00:19.112) 0:04:30.429 ******* 2026-02-16 05:35:34.128827 | orchestrator | =============================================================================== 2026-02-16 05:35:34.128836 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 78.07s 2026-02-16 05:35:34.128846 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 19.11s 2026-02-16 05:35:34.128855 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 19.10s 2026-02-16 05:35:34.128865 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 10.46s 2026-02-16 05:35:34.128874 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.21s 2026-02-16 05:35:34.128884 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.09s 2026-02-16 05:35:34.128893 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.41s 2026-02-16 05:35:34.128902 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.15s 2026-02-16 05:35:34.128911 | orchestrator | mariadb : Check MariaDB service WSREP 
sync status ----------------------- 3.79s 2026-02-16 05:35:34.128921 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.70s 2026-02-16 05:35:34.128930 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.66s 2026-02-16 05:35:34.128941 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.60s 2026-02-16 05:35:34.128952 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.53s 2026-02-16 05:35:34.128963 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.51s 2026-02-16 05:35:34.128974 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 3.51s 2026-02-16 05:35:34.128985 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.48s 2026-02-16 05:35:34.128996 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.37s 2026-02-16 05:35:34.129006 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.19s 2026-02-16 05:35:34.129015 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 3.16s 2026-02-16 05:35:34.129024 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 3.16s 2026-02-16 05:35:34.413375 | orchestrator | + osism apply -a upgrade rabbitmq 2026-02-16 05:35:36.431111 | orchestrator | 2026-02-16 05:35:36 | INFO  | Task 2c1a36ee-729f-41c1-9d34-3e9f6f5182b3 (rabbitmq) was prepared for execution. 2026-02-16 05:35:36.431253 | orchestrator | 2026-02-16 05:35:36 | INFO  | It takes a moment until task 2c1a36ee-729f-41c1-9d34-3e9f6f5182b3 (rabbitmq) has been started and output is visible here. 
2026-02-16 05:36:21.373082 | orchestrator |
2026-02-16 05:36:21.373223 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-16 05:36:21.373250 | orchestrator |
2026-02-16 05:36:21.373269 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-16 05:36:21.373385 | orchestrator | Monday 16 February 2026 05:35:42 +0000 (0:00:01.552) 0:00:01.552 *******
2026-02-16 05:36:21.373405 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:36:21.373422 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:36:21.373437 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:36:21.373453 | orchestrator |
2026-02-16 05:36:21.373470 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-16 05:36:21.373484 | orchestrator | Monday 16 February 2026 05:35:44 +0000 (0:00:02.455) 0:00:04.007 *******
2026-02-16 05:36:21.373499 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-02-16 05:36:21.373515 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-02-16 05:36:21.373562 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-02-16 05:36:21.373581 | orchestrator |
2026-02-16 05:36:21.373598 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-02-16 05:36:21.373615 | orchestrator |
2026-02-16 05:36:21.373632 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-16 05:36:21.373649 | orchestrator | Monday 16 February 2026 05:35:46 +0000 (0:00:02.374) 0:00:06.382 *******
2026-02-16 05:36:21.373666 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 05:36:21.373685 | orchestrator |
2026-02-16 05:36:21.373702 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-16 05:36:21.373719 | orchestrator | Monday 16 February 2026 05:35:49 +0000 (0:00:02.220) 0:00:08.602 *******
2026-02-16 05:36:21.373738 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:36:21.373755 | orchestrator |
2026-02-16 05:36:21.373774 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-02-16 05:36:21.373791 | orchestrator | Monday 16 February 2026 05:35:51 +0000 (0:00:02.310) 0:00:10.912 *******
2026-02-16 05:36:21.373807 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:36:21.373824 | orchestrator |
2026-02-16 05:36:21.373840 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-02-16 05:36:21.373856 | orchestrator | Monday 16 February 2026 05:35:54 +0000 (0:00:03.323) 0:00:14.236 *******
2026-02-16 05:36:21.373873 | orchestrator | changed: [testbed-node-0]
2026-02-16 05:36:21.373891 | orchestrator |
2026-02-16 05:36:21.373909 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-02-16 05:36:21.373925 | orchestrator | Monday 16 February 2026 05:36:04 +0000 (0:00:10.104) 0:00:24.341 *******
2026-02-16 05:36:21.373941 | orchestrator | ok: [testbed-node-0] => {
2026-02-16 05:36:21.373957 | orchestrator |  "changed": false,
2026-02-16 05:36:21.373974 | orchestrator |  "msg": "All assertions passed"
2026-02-16 05:36:21.373991 | orchestrator | }
2026-02-16 05:36:21.374007 | orchestrator |
2026-02-16 05:36:21.374112 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-02-16 05:36:21.374132 | orchestrator | Monday 16 February 2026 05:36:06 +0000 (0:00:01.326) 0:00:25.668 *******
2026-02-16 05:36:21.374149 | orchestrator | ok: [testbed-node-0] => {
2026-02-16 05:36:21.374166 | orchestrator |  "changed": false,
2026-02-16 05:36:21.374183 | orchestrator |  "msg": "All assertions passed"
2026-02-16 05:36:21.374201 | orchestrator | }
2026-02-16 05:36:21.374218 |
orchestrator | 2026-02-16 05:36:21.374236 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-16 05:36:21.374253 | orchestrator | Monday 16 February 2026 05:36:07 +0000 (0:00:01.688) 0:00:27.356 ******* 2026-02-16 05:36:21.374271 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:36:21.374319 | orchestrator | 2026-02-16 05:36:21.374338 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-16 05:36:21.374354 | orchestrator | Monday 16 February 2026 05:36:09 +0000 (0:00:01.725) 0:00:29.082 ******* 2026-02-16 05:36:21.374371 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:36:21.374389 | orchestrator | 2026-02-16 05:36:21.374407 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-16 05:36:21.374425 | orchestrator | Monday 16 February 2026 05:36:11 +0000 (0:00:02.257) 0:00:31.339 ******* 2026-02-16 05:36:21.374442 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:36:21.374460 | orchestrator | 2026-02-16 05:36:21.374478 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-16 05:36:21.374496 | orchestrator | Monday 16 February 2026 05:36:14 +0000 (0:00:03.041) 0:00:34.380 ******* 2026-02-16 05:36:21.374513 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:36:21.374529 | orchestrator | 2026-02-16 05:36:21.374547 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-16 05:36:21.374584 | orchestrator | Monday 16 February 2026 05:36:16 +0000 (0:00:02.018) 0:00:36.398 ******* 2026-02-16 05:36:21.374663 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-16 05:36:21.374690 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2026-02-16 05:36:21.374711 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-16 05:36:21.374729 | orchestrator | 2026-02-16 05:36:21.374747 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-16 05:36:21.374765 | orchestrator | Monday 16 February 2026 05:36:18 +0000 (0:00:01.847) 0:00:38.246 ******* 2026-02-16 05:36:21.374784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-16 05:36:21.374835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-16 05:36:40.607898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-16 05:36:40.608014 | orchestrator |
2026-02-16 05:36:40.608030 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-02-16 05:36:40.608044 | orchestrator | Monday 16 February 2026 05:36:21 +0000 (0:00:02.507) 0:00:40.753 *******
2026-02-16 05:36:40.608062 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-16 05:36:40.608081 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-16 05:36:40.608111 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-16 05:36:40.608127 | orchestrator |
2026-02-16 05:36:40.608146 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-02-16 05:36:40.608164 | orchestrator | Monday 16 February 2026 05:36:23 +0000 (0:00:02.471) 0:00:43.225 *******
2026-02-16 05:36:40.608180 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-16 05:36:40.608195 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-16 05:36:40.608214 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-16 05:36:40.608260 | orchestrator |
2026-02-16 05:36:40.608388 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-02-16 05:36:40.608403 | orchestrator | Monday 16 February 2026 05:36:26 +0000 (0:00:02.948) 0:00:46.174 *******
2026-02-16 05:36:40.608413 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-16 05:36:40.608423 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-16 05:36:40.608435 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-16 05:36:40.608446 | orchestrator |
2026-02-16 05:36:40.608457 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-02-16 05:36:40.608468 | orchestrator | Monday 16 February 2026 05:36:29 +0000 (0:00:02.398) 0:00:48.572 *******
2026-02-16 05:36:40.608479 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-16 05:36:40.608490 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-16 05:36:40.608501 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-16 05:36:40.608511 | orchestrator |
2026-02-16 05:36:40.608522 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-02-16 05:36:40.608532 | orchestrator | Monday 16 February 2026 05:36:31 +0000 (0:00:02.472) 0:00:51.045 *******
2026-02-16 05:36:40.608543 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-16 05:36:40.608554 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-16 05:36:40.608579 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-16 05:36:40.608590 | orchestrator |
2026-02-16 05:36:40.608601 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-02-16 05:36:40.608612 | orchestrator | Monday 16 February 2026 05:36:33 +0000 (0:00:02.318) 0:00:53.364 *******
2026-02-16 05:36:40.608623 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-16 05:36:40.608635 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-16 05:36:40.608645 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-16 05:36:40.608655 | orchestrator |
2026-02-16 05:36:40.608664 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-16 05:36:40.608674 | orchestrator | Monday 16 February 2026 05:36:36 +0000 (0:00:02.488) 0:00:55.852 *******
2026-02-16 05:36:40.608683 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-16 05:36:40.608693 | orchestrator |
2026-02-16 05:36:40.608722 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] *******
2026-02-16 05:36:40.608733 | orchestrator | Monday 16 February 2026 05:36:38 +0000 (0:00:01.684) 0:00:57.537 *******
2026-02-16 05:36:40.608744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE':
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-16 05:36:40.608767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-16 05:36:40.608783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-16 05:36:40.608794 | orchestrator | 2026-02-16 05:36:40.608804 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-02-16 05:36:40.608813 | orchestrator | Monday 16 February 2026 05:36:40 +0000 (0:00:02.318) 0:00:59.855 ******* 2026-02-16 05:36:40.608837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-16 05:36:50.279104 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:36:50.279248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-16 05:36:50.279386 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:36:50.279411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-16 05:36:50.279430 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:36:50.279447 | orchestrator | 2026-02-16 05:36:50.279465 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-02-16 05:36:50.279505 | orchestrator | Monday 16 February 2026 05:36:42 +0000 (0:00:01.648) 0:01:01.503 ******* 2026-02-16 05:36:50.279544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-16 05:36:50.279564 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:36:50.279609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-16 05:36:50.279644 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:36:50.279662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-16 05:36:50.279678 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:36:50.279694 | orchestrator | 2026-02-16 05:36:50.279711 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-16 05:36:50.279727 | orchestrator | Monday 16 February 2026 05:36:43 +0000 (0:00:01.809) 0:01:03.313 ******* 2026-02-16 05:36:50.279744 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:36:50.279761 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:36:50.279777 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:36:50.279794 | orchestrator | 2026-02-16 05:36:50.279811 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-02-16 05:36:50.279829 | orchestrator | Monday 16 February 2026 05:36:47 +0000 (0:00:04.041) 0:01:07.355 ******* 2026-02-16 05:36:50.279855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-16 05:36:50.279888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-16 05:38:41.156190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-16 05:38:41.156352 | orchestrator | 2026-02-16 05:38:41.156371 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-02-16 05:38:41.156385 | orchestrator | Monday 16 February 2026 05:36:50 +0000 (0:00:02.328) 0:01:09.684 ******* 2026-02-16 05:38:41.156397 | orchestrator | changed: [testbed-node-0] => { 2026-02-16 05:38:41.156409 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:38:41.156421 | orchestrator | } 2026-02-16 05:38:41.156432 | orchestrator | changed: [testbed-node-1] => { 2026-02-16 05:38:41.156442 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:38:41.156454 | orchestrator | } 2026-02-16 05:38:41.156464 | orchestrator | changed: [testbed-node-2] => { 2026-02-16 05:38:41.156475 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:38:41.156486 | orchestrator | } 2026-02-16 05:38:41.156497 | orchestrator | 2026-02-16 05:38:41.156508 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-16 05:38:41.156519 | orchestrator | Monday 16 February 2026 05:36:51 +0000 (0:00:01.376) 0:01:11.061 ******* 2026-02-16 05:38:41.156548 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-16 05:38:41.156562 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:38:41.156574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-16 05:38:41.156608 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:38:41.156640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-16 05:38:41.156653 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:38:41.156664 | orchestrator | 2026-02-16 05:38:41.156675 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-16 05:38:41.156686 | orchestrator | Monday 16 February 2026 05:36:53 +0000 (0:00:02.032) 0:01:13.093 ******* 2026-02-16 05:38:41.156697 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:38:41.156708 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:38:41.156721 | orchestrator | 
changed: [testbed-node-2] 2026-02-16 05:38:41.156733 | orchestrator | 2026-02-16 05:38:41.156746 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-16 05:38:41.156758 | orchestrator | 2026-02-16 05:38:41.156770 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-16 05:38:41.156783 | orchestrator | Monday 16 February 2026 05:36:55 +0000 (0:00:01.879) 0:01:14.973 ******* 2026-02-16 05:38:41.156795 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:38:41.156808 | orchestrator | 2026-02-16 05:38:41.156821 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-16 05:38:41.156831 | orchestrator | Monday 16 February 2026 05:36:57 +0000 (0:00:02.169) 0:01:17.143 ******* 2026-02-16 05:38:41.156842 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:38:41.156852 | orchestrator | 2026-02-16 05:38:41.156863 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-16 05:38:41.156874 | orchestrator | Monday 16 February 2026 05:37:08 +0000 (0:00:10.278) 0:01:27.421 ******* 2026-02-16 05:38:41.156885 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:38:41.156896 | orchestrator | 2026-02-16 05:38:41.156906 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-16 05:38:41.156917 | orchestrator | Monday 16 February 2026 05:37:17 +0000 (0:00:09.158) 0:01:36.580 ******* 2026-02-16 05:38:41.156928 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:38:41.156939 | orchestrator | 2026-02-16 05:38:41.156949 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-16 05:38:41.156969 | orchestrator | 2026-02-16 05:38:41.156980 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-16 05:38:41.156991 | orchestrator | 
Monday 16 February 2026 05:37:28 +0000 (0:00:11.123) 0:01:47.704 ******* 2026-02-16 05:38:41.157001 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:38:41.157012 | orchestrator | 2026-02-16 05:38:41.157023 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-16 05:38:41.157033 | orchestrator | Monday 16 February 2026 05:37:30 +0000 (0:00:01.897) 0:01:49.601 ******* 2026-02-16 05:38:41.157044 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:38:41.157054 | orchestrator | 2026-02-16 05:38:41.157065 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-16 05:38:41.157076 | orchestrator | Monday 16 February 2026 05:37:39 +0000 (0:00:09.612) 0:01:59.213 ******* 2026-02-16 05:38:41.157086 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:38:41.157097 | orchestrator | 2026-02-16 05:38:41.157108 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-16 05:38:41.157119 | orchestrator | Monday 16 February 2026 05:37:54 +0000 (0:00:14.365) 0:02:13.579 ******* 2026-02-16 05:38:41.157129 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:38:41.157140 | orchestrator | 2026-02-16 05:38:41.157151 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-16 05:38:41.157161 | orchestrator | 2026-02-16 05:38:41.157172 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-16 05:38:41.157183 | orchestrator | Monday 16 February 2026 05:38:04 +0000 (0:00:10.270) 0:02:23.849 ******* 2026-02-16 05:38:41.157193 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:38:41.157204 | orchestrator | 2026-02-16 05:38:41.157216 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-16 05:38:41.157226 | orchestrator | Monday 16 February 2026 05:38:06 +0000 (0:00:01.746) 
0:02:25.596 ******* 2026-02-16 05:38:41.157237 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:38:41.157270 | orchestrator | 2026-02-16 05:38:41.157281 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-16 05:38:41.157375 | orchestrator | Monday 16 February 2026 05:38:16 +0000 (0:00:09.860) 0:02:35.457 ******* 2026-02-16 05:38:41.157395 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:38:41.157407 | orchestrator | 2026-02-16 05:38:41.157417 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-16 05:38:41.157428 | orchestrator | Monday 16 February 2026 05:38:30 +0000 (0:00:14.296) 0:02:49.753 ******* 2026-02-16 05:38:41.157439 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:38:41.157449 | orchestrator | 2026-02-16 05:38:41.157460 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-16 05:38:41.157470 | orchestrator | 2026-02-16 05:38:41.157481 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-16 05:38:41.157502 | orchestrator | Monday 16 February 2026 05:38:41 +0000 (0:00:10.799) 0:03:00.552 ******* 2026-02-16 05:38:47.463362 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:38:47.463475 | orchestrator | 2026-02-16 05:38:47.463494 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-16 05:38:47.463506 | orchestrator | Monday 16 February 2026 05:38:42 +0000 (0:00:01.329) 0:03:01.881 ******* 2026-02-16 05:38:47.463517 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:38:47.463529 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:38:47.463540 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:38:47.463551 | orchestrator | 2026-02-16 05:38:47.463563 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-16 05:38:47.463575 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-16 05:38:47.463588 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-16 05:38:47.463632 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-16 05:38:47.463643 | orchestrator | 2026-02-16 05:38:47.463654 | orchestrator | 2026-02-16 05:38:47.463665 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 05:38:47.463676 | orchestrator | Monday 16 February 2026 05:38:47 +0000 (0:00:04.646) 0:03:06.528 ******* 2026-02-16 05:38:47.463686 | orchestrator | =============================================================================== 2026-02-16 05:38:47.463697 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 37.82s 2026-02-16 05:38:47.463707 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 32.19s 2026-02-16 05:38:47.463718 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 29.75s 2026-02-16 05:38:47.463728 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------ 10.10s 2026-02-16 05:38:47.463739 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.81s 2026-02-16 05:38:47.463749 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.65s 2026-02-16 05:38:47.463760 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.04s 2026-02-16 05:38:47.463770 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.32s 2026-02-16 05:38:47.463781 | orchestrator | rabbitmq : List RabbitMQ policies 
--------------------------------------- 3.04s 2026-02-16 05:38:47.463792 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.95s 2026-02-16 05:38:47.463802 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.51s 2026-02-16 05:38:47.463813 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.49s 2026-02-16 05:38:47.463823 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.47s 2026-02-16 05:38:47.463836 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.47s 2026-02-16 05:38:47.463849 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.46s 2026-02-16 05:38:47.463861 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.40s 2026-02-16 05:38:47.463873 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.37s 2026-02-16 05:38:47.463885 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 2.33s 2026-02-16 05:38:47.463912 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.32s 2026-02-16 05:38:47.463925 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.32s 2026-02-16 05:38:47.766413 | orchestrator | + osism apply -a upgrade openvswitch 2026-02-16 05:38:49.719750 | orchestrator | 2026-02-16 05:38:49 | INFO  | Task 6ccb0aed-ef62-4ff6-9d51-f23d29dcef85 (openvswitch) was prepared for execution. 2026-02-16 05:38:49.719862 | orchestrator | 2026-02-16 05:38:49 | INFO  | It takes a moment until task 6ccb0aed-ef62-4ff6-9d51-f23d29dcef85 (openvswitch) has been started and output is visible here. 
2026-02-16 05:39:14.752773 | orchestrator | 2026-02-16 05:39:14.752887 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 05:39:14.752912 | orchestrator | 2026-02-16 05:39:14.752931 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 05:39:14.752950 | orchestrator | Monday 16 February 2026 05:38:55 +0000 (0:00:01.497) 0:00:01.497 ******* 2026-02-16 05:39:14.752968 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:39:14.752989 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:39:14.753009 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:39:14.753027 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:39:14.753046 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:39:14.753064 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:39:14.753081 | orchestrator | 2026-02-16 05:39:14.753100 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 05:39:14.753148 | orchestrator | Monday 16 February 2026 05:38:58 +0000 (0:00:02.852) 0:00:04.350 ******* 2026-02-16 05:39:14.753160 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-16 05:39:14.753170 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-16 05:39:14.753179 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-16 05:39:14.753189 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-16 05:39:14.753198 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-16 05:39:14.753208 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-16 05:39:14.753217 | orchestrator | 2026-02-16 05:39:14.753227 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-02-16 05:39:14.753270 | orchestrator | 2026-02-16 05:39:14.753289 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-16 05:39:14.753306 | orchestrator | Monday 16 February 2026 05:39:01 +0000 (0:00:02.998) 0:00:07.348 ******* 2026-02-16 05:39:14.753326 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 05:39:14.753344 | orchestrator | 2026-02-16 05:39:14.753363 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-16 05:39:14.753382 | orchestrator | Monday 16 February 2026 05:39:03 +0000 (0:00:02.237) 0:00:09.586 ******* 2026-02-16 05:39:14.753400 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-16 05:39:14.753419 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-16 05:39:14.753433 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-02-16 05:39:14.753444 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-02-16 05:39:14.753455 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-16 05:39:14.753464 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-02-16 05:39:14.753474 | orchestrator | 2026-02-16 05:39:14.753484 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-16 05:39:14.753493 | orchestrator | Monday 16 February 2026 05:39:05 +0000 (0:00:02.049) 0:00:11.636 ******* 2026-02-16 05:39:14.753503 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-16 05:39:14.753512 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-02-16 05:39:14.753521 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-16 05:39:14.753531 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-02-16 
05:39:14.753540 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-16 05:39:14.753549 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-02-16 05:39:14.753559 | orchestrator | 2026-02-16 05:39:14.753568 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-16 05:39:14.753578 | orchestrator | Monday 16 February 2026 05:39:08 +0000 (0:00:02.574) 0:00:14.210 ******* 2026-02-16 05:39:14.753587 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-16 05:39:14.753597 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:39:14.753607 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-16 05:39:14.753617 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:39:14.753628 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-16 05:39:14.753645 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:39:14.753660 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-16 05:39:14.753675 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:39:14.753691 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-16 05:39:14.753707 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:39:14.753723 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-16 05:39:14.753754 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:39:14.753772 | orchestrator | 2026-02-16 05:39:14.753788 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-16 05:39:14.753805 | orchestrator | Monday 16 February 2026 05:39:10 +0000 (0:00:02.353) 0:00:16.564 ******* 2026-02-16 05:39:14.753823 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:39:14.753839 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:39:14.753855 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:39:14.753866 | orchestrator | skipping: 
[testbed-node-3] 2026-02-16 05:39:14.753892 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:39:14.753901 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:39:14.753911 | orchestrator | 2026-02-16 05:39:14.753920 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-16 05:39:14.753930 | orchestrator | Monday 16 February 2026 05:39:12 +0000 (0:00:01.867) 0:00:18.432 ******* 2026-02-16 05:39:14.753965 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:14.753982 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:14.753992 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:14.754002 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:14.754012 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:14.754104 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:14.754137 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:16.992575 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:16.992671 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:16.992682 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:16.992714 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:16.992735 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:16.992743 | orchestrator | 2026-02-16 05:39:16.992751 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-16 05:39:16.992760 | orchestrator | Monday 16 February 2026 05:39:14 +0000 (0:00:02.379) 0:00:20.812 ******* 2026-02-16 05:39:16.992781 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:16.992788 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:16.992795 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:16.992806 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:16.992816 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:16.992822 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:16.992834 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:22.302488 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:22.302568 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:22.302593 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:22.302610 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:22.302615 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:22.302621 | orchestrator | 2026-02-16 05:39:22.302628 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-16 05:39:22.302635 | orchestrator | Monday 16 February 2026 05:39:18 +0000 (0:00:03.299) 0:00:24.111 ******* 2026-02-16 05:39:22.302640 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:39:22.302646 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:39:22.302651 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:39:22.302657 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:39:22.302662 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:39:22.302667 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:39:22.302672 | orchestrator | 2026-02-16 05:39:22.302677 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-02-16 05:39:22.302693 | orchestrator | Monday 16 February 2026 05:39:20 +0000 (0:00:02.210) 0:00:26.321 ******* 2026-02-16 05:39:22.302699 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:22.302712 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:22.302717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:22.302726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:22.302732 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:22.302742 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-16 05:39:26.032426 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:26.032568 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:26.032618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:26.032640 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:26.032662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:26.032708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-16 05:39:26.032748 | orchestrator | 2026-02-16 05:39:26.032764 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-02-16 05:39:26.032776 | orchestrator | Monday 16 February 2026 05:39:23 +0000 (0:00:03.298) 0:00:29.620 ******* 2026-02-16 05:39:26.032788 | orchestrator | changed: [testbed-node-0] => { 2026-02-16 05:39:26.032801 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:39:26.032811 | orchestrator | } 2026-02-16 05:39:26.032822 | orchestrator | changed: [testbed-node-1] => { 2026-02-16 05:39:26.032840 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:39:26.032859 | orchestrator | } 2026-02-16 05:39:26.032883 | orchestrator | changed: [testbed-node-2] => { 2026-02-16 05:39:26.032909 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 
05:39:26.032929 | orchestrator | } 2026-02-16 05:39:26.032948 | orchestrator | changed: [testbed-node-3] => { 2026-02-16 05:39:26.032966 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:39:26.032981 | orchestrator | } 2026-02-16 05:39:26.032997 | orchestrator | changed: [testbed-node-4] => { 2026-02-16 05:39:26.033014 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:39:26.033031 | orchestrator | } 2026-02-16 05:39:26.033048 | orchestrator | changed: [testbed-node-5] => { 2026-02-16 05:39:26.033064 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:39:26.033082 | orchestrator | } 2026-02-16 05:39:26.033099 | orchestrator | 2026-02-16 05:39:26.033116 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-16 05:39:26.033133 | orchestrator | Monday 16 February 2026 05:39:25 +0000 (0:00:02.022) 0:00:31.643 ******* 2026-02-16 05:39:26.033152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-16 05:39:26.033186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-16 05:39:26.033204 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:39:26.033223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-16 05:39:26.033290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}})  2026-02-16 05:39:26.033328 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:39:56.815810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-16 05:39:56.815931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-16 05:39:56.815949 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:39:56.815978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-16 05:39:56.815992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-16 05:39:56.816027 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:39:56.816040 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-16 05:39:56.816071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-16 05:39:56.816083 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:39:56.816096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-16 05:39:56.816109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': 
True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-16 05:39:56.816121 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:39:56.816133 | orchestrator | 2026-02-16 05:39:56.816152 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-16 05:39:56.816166 | orchestrator | Monday 16 February 2026 05:39:28 +0000 (0:00:02.556) 0:00:34.199 ******* 2026-02-16 05:39:56.816178 | orchestrator | 2026-02-16 05:39:56.816190 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-16 05:39:56.816200 | orchestrator | Monday 16 February 2026 05:39:28 +0000 (0:00:00.513) 0:00:34.713 ******* 2026-02-16 05:39:56.816211 | orchestrator | 2026-02-16 05:39:56.816269 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-16 05:39:56.816282 | orchestrator | Monday 16 February 2026 05:39:29 +0000 (0:00:00.496) 0:00:35.209 ******* 2026-02-16 05:39:56.816301 | orchestrator | 2026-02-16 05:39:56.816312 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-16 05:39:56.816323 | orchestrator | Monday 16 February 2026 05:39:29 +0000 (0:00:00.519) 0:00:35.729 ******* 2026-02-16 05:39:56.816334 | orchestrator | 2026-02-16 05:39:56.816346 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-16 05:39:56.816358 | orchestrator | Monday 16 February 2026 05:39:30 +0000 (0:00:00.715) 0:00:36.444 ******* 2026-02-16 05:39:56.816370 | orchestrator | 2026-02-16 05:39:56.816383 | 
orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-16 05:39:56.816396 | orchestrator | Monday 16 February 2026 05:39:30 +0000 (0:00:00.513) 0:00:36.958 ******* 2026-02-16 05:39:56.816407 | orchestrator | 2026-02-16 05:39:56.816419 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-16 05:39:56.816432 | orchestrator | Monday 16 February 2026 05:39:31 +0000 (0:00:00.844) 0:00:37.802 ******* 2026-02-16 05:39:56.816444 | orchestrator | changed: [testbed-node-3] 2026-02-16 05:39:56.816457 | orchestrator | changed: [testbed-node-4] 2026-02-16 05:39:56.816469 | orchestrator | changed: [testbed-node-5] 2026-02-16 05:39:56.816482 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:39:56.816494 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:39:56.816506 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:39:56.816518 | orchestrator | 2026-02-16 05:39:56.816531 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-16 05:39:56.816543 | orchestrator | Monday 16 February 2026 05:39:43 +0000 (0:00:11.715) 0:00:49.518 ******* 2026-02-16 05:39:56.816557 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:39:56.816570 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:39:56.816582 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:39:56.816594 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:39:56.816607 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:39:56.816618 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:39:56.816631 | orchestrator | 2026-02-16 05:39:56.816641 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-16 05:39:56.816652 | orchestrator | Monday 16 February 2026 05:39:45 +0000 (0:00:02.477) 0:00:51.995 ******* 2026-02-16 05:39:56.816663 | orchestrator | changed: [testbed-node-3] 2026-02-16 05:39:56.816673 | orchestrator | 
changed: [testbed-node-4] 2026-02-16 05:39:56.816684 | orchestrator | changed: [testbed-node-5] 2026-02-16 05:39:56.816694 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:39:56.816705 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:39:56.816715 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:39:56.816726 | orchestrator | 2026-02-16 05:39:56.816737 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-16 05:39:56.816754 | orchestrator | Monday 16 February 2026 05:39:56 +0000 (0:00:10.872) 0:01:02.868 ******* 2026-02-16 05:40:12.796097 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-16 05:40:12.796264 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-16 05:40:12.796282 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-16 05:40:12.796290 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-16 05:40:12.796297 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-16 05:40:12.796305 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-16 05:40:12.796313 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-16 05:40:12.796320 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-16 05:40:12.796327 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-16 05:40:12.796358 | orchestrator | ok: [testbed-node-0] => (item={'col': 
'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-16 05:40:12.796365 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-16 05:40:12.796372 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-16 05:40:12.796380 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-16 05:40:12.796387 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-16 05:40:12.796394 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-16 05:40:12.796401 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-16 05:40:12.796422 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-16 05:40:12.796429 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-16 05:40:12.796437 | orchestrator | 2026-02-16 05:40:12.796446 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-02-16 05:40:12.796454 | orchestrator | Monday 16 February 2026 05:40:04 +0000 (0:00:07.863) 0:01:10.732 ******* 2026-02-16 05:40:12.796462 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-16 05:40:12.796471 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:40:12.796481 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-16 05:40:12.796489 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:40:12.796497 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-16 
05:40:12.796506 | orchestrator | skipping: [testbed-node-5]
2026-02-16 05:40:12.796514 | orchestrator | ok: [testbed-node-0] => (item=br-ex)
2026-02-16 05:40:12.796523 | orchestrator | ok: [testbed-node-1] => (item=br-ex)
2026-02-16 05:40:12.796531 | orchestrator | ok: [testbed-node-2] => (item=br-ex)
2026-02-16 05:40:12.796540 | orchestrator |
2026-02-16 05:40:12.796549 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-02-16 05:40:12.796557 | orchestrator | Monday 16 February 2026 05:40:07 +0000 (0:00:03.201) 0:01:13.934 *******
2026-02-16 05:40:12.796566 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-02-16 05:40:12.796574 | orchestrator | skipping: [testbed-node-3]
2026-02-16 05:40:12.796583 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-02-16 05:40:12.796591 | orchestrator | skipping: [testbed-node-4]
2026-02-16 05:40:12.796600 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-02-16 05:40:12.796609 | orchestrator | skipping: [testbed-node-5]
2026-02-16 05:40:12.796618 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-02-16 05:40:12.796626 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-02-16 05:40:12.796650 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-02-16 05:40:12.796660 | orchestrator |
2026-02-16 05:40:12.796670 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 05:40:12.796690 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-16 05:40:12.796702 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-16 05:40:12.796712 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-16 05:40:12.796730 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-16 05:40:12.796757 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-16 05:40:12.796766 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-16 05:40:12.796774 | orchestrator |
2026-02-16 05:40:12.796783 | orchestrator |
2026-02-16 05:40:12.796791 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 05:40:12.796800 | orchestrator | Monday 16 February 2026 05:40:12 +0000 (0:00:04.427) 0:01:18.362 *******
2026-02-16 05:40:12.796808 | orchestrator | ===============================================================================
2026-02-16 05:40:12.796817 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.72s
2026-02-16 05:40:12.796825 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 10.87s
2026-02-16 05:40:12.796834 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.86s
2026-02-16 05:40:12.796842 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.43s
2026-02-16 05:40:12.796850 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.60s
2026-02-16 05:40:12.796859 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.30s
2026-02-16 05:40:12.796867 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.30s
2026-02-16 05:40:12.796876 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.20s
2026-02-16 05:40:12.796884 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.00s
2026-02-16 05:40:12.796892 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.85s
2026-02-16 05:40:12.796901 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.57s
2026-02-16 05:40:12.796909 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.56s
2026-02-16 05:40:12.796917 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.48s
2026-02-16 05:40:12.796926 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.38s
2026-02-16 05:40:12.796934 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.35s
2026-02-16 05:40:12.796943 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.24s
2026-02-16 05:40:12.796955 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.21s
2026-02-16 05:40:12.796964 | orchestrator | module-load : Load modules ---------------------------------------------- 2.05s
2026-02-16 05:40:12.796973 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 2.02s
2026-02-16 05:40:12.796981 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.87s
2026-02-16 05:40:13.125167 | orchestrator | + osism apply -a upgrade ovn
2026-02-16 05:40:15.244826 | orchestrator | 2026-02-16 05:40:15 | INFO  | Task 157ab5c4-f9dd-480c-a4d9-1de52ccf29ae (ovn) was prepared for execution.
2026-02-16 05:40:15.244900 | orchestrator | 2026-02-16 05:40:15 | INFO  | It takes a moment until task 157ab5c4-f9dd-480c-a4d9-1de52ccf29ae (ovn) has been started and output is visible here.
2026-02-16 05:40:36.259178 | orchestrator | 2026-02-16 05:40:36.259332 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-16 05:40:36.259351 | orchestrator | 2026-02-16 05:40:36.259363 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-16 05:40:36.259374 | orchestrator | Monday 16 February 2026 05:40:21 +0000 (0:00:01.780) 0:00:01.780 ******* 2026-02-16 05:40:36.259386 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:40:36.259397 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:40:36.259432 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:40:36.259444 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:40:36.259454 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:40:36.259465 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:40:36.259475 | orchestrator | 2026-02-16 05:40:36.259487 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-16 05:40:36.259498 | orchestrator | Monday 16 February 2026 05:40:23 +0000 (0:00:02.687) 0:00:04.468 ******* 2026-02-16 05:40:36.259509 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-16 05:40:36.259520 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-16 05:40:36.259530 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-16 05:40:36.259541 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-16 05:40:36.259552 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-16 05:40:36.259562 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-16 05:40:36.259572 | orchestrator | 2026-02-16 05:40:36.259583 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-16 05:40:36.259594 | orchestrator | 2026-02-16 05:40:36.259604 | orchestrator | TASK [ovn-controller : include_tasks] 
****************************************** 2026-02-16 05:40:36.259615 | orchestrator | Monday 16 February 2026 05:40:26 +0000 (0:00:02.591) 0:00:07.060 ******* 2026-02-16 05:40:36.259626 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 05:40:36.259638 | orchestrator | 2026-02-16 05:40:36.259648 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-16 05:40:36.259659 | orchestrator | Monday 16 February 2026 05:40:29 +0000 (0:00:02.630) 0:00:09.690 ******* 2026-02-16 05:40:36.259672 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:36.259686 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:36.259697 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:36.259708 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:36.259737 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:36.259777 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:36.259791 | orchestrator | 2026-02-16 05:40:36.259803 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-16 05:40:36.259817 | orchestrator | Monday 16 February 2026 05:40:31 +0000 (0:00:02.306) 0:00:11.997 ******* 2026-02-16 05:40:36.259829 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:36.259843 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:36.259856 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:36.259869 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:36.259896 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:36.259908 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:36.259921 | orchestrator | 2026-02-16 05:40:36.259934 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-16 05:40:36.259947 | orchestrator | Monday 16 February 2026 05:40:34 +0000 (0:00:02.513) 0:00:14.511 ******* 2026-02-16 05:40:36.259965 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:36.259986 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:36.260006 | orchestrator | ok: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:44.197566 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:44.197695 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:44.197714 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:44.197726 | orchestrator | 2026-02-16 05:40:44.197742 | orchestrator | TASK [ovn-controller : Copying over systemd override] 
************************** 2026-02-16 05:40:44.197763 | orchestrator | Monday 16 February 2026 05:40:36 +0000 (0:00:02.209) 0:00:16.720 ******* 2026-02-16 05:40:44.197783 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:44.197804 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:44.197823 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:44.197933 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:44.197949 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:44.197980 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:44.197992 | orchestrator | 2026-02-16 05:40:44.198003 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-02-16 05:40:44.198014 | orchestrator | Monday 16 February 2026 05:40:39 +0000 (0:00:03.116) 0:00:19.837 ******* 2026-02-16 05:40:44.198125 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:44.198142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:44.198155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:44.198179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:44.198192 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:44.198237 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:40:44.198259 | orchestrator | 2026-02-16 05:40:44.198279 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-02-16 05:40:44.198300 | orchestrator | Monday 16 February 2026 05:40:41 +0000 (0:00:02.599) 0:00:22.437 ******* 2026-02-16 05:40:44.198328 | orchestrator | changed: [testbed-node-0] => { 2026-02-16 05:40:44.198351 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:40:44.198369 | orchestrator | } 2026-02-16 05:40:44.198389 | orchestrator | changed: [testbed-node-1] => { 2026-02-16 05:40:44.198406 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:40:44.198425 | orchestrator | } 2026-02-16 05:40:44.198444 | orchestrator | changed: [testbed-node-2] => { 2026-02-16 05:40:44.198463 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:40:44.198482 | orchestrator | } 2026-02-16 05:40:44.198500 | orchestrator | changed: [testbed-node-3] => { 2026-02-16 05:40:44.198518 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:40:44.198531 | orchestrator | } 2026-02-16 05:40:44.198542 | orchestrator | changed: [testbed-node-4] => { 2026-02-16 05:40:44.198552 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:40:44.198563 | orchestrator | } 2026-02-16 05:40:44.198574 | orchestrator | changed: [testbed-node-5] => { 2026-02-16 05:40:44.198584 | orchestrator |  "msg": "Notifying handlers" 2026-02-16 05:40:44.198595 | orchestrator | } 2026-02-16 05:40:44.198606 | orchestrator | 2026-02-16 05:40:44.198616 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-16 05:40:44.198627 | orchestrator | Monday 16 February 2026 05:40:44 +0000 
(0:00:02.095) 0:00:24.533 ******* 2026-02-16 05:40:44.198651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:41:14.620894 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:41:14.621041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:41:14.621065 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:41:14.621078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:41:14.621090 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:41:14.621101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:41:14.621141 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:41:14.621153 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:41:14.621164 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:41:14.621175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:41:14.621186 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:41:14.621197 | orchestrator | 2026-02-16 05:41:14.621240 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-16 05:41:14.621253 | orchestrator | Monday 16 February 2026 05:40:46 +0000 (0:00:02.618) 0:00:27.152 ******* 2026-02-16 05:41:14.621263 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:41:14.621275 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:41:14.621286 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:41:14.621296 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:41:14.621307 | orchestrator | ok: [testbed-node-4] 
2026-02-16 05:41:14.621317 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:41:14.621328 | orchestrator | 2026-02-16 05:41:14.621354 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-16 05:41:14.621365 | orchestrator | Monday 16 February 2026 05:40:50 +0000 (0:00:03.684) 0:00:30.836 ******* 2026-02-16 05:41:14.621376 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-16 05:41:14.621388 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-16 05:41:14.621399 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-16 05:41:14.621409 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-16 05:41:14.621420 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-16 05:41:14.621431 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-16 05:41:14.621441 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-16 05:41:14.621452 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-16 05:41:14.621462 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-16 05:41:14.621474 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-16 05:41:14.621485 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-16 05:41:14.621514 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-16 05:41:14.621526 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-16 05:41:14.621538 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-16 05:41:14.621558 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-16 05:41:14.621569 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-16 05:41:14.621580 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-16 05:41:14.621591 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-16 05:41:14.621602 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-16 05:41:14.621613 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-16 05:41:14.621623 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-16 05:41:14.621634 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-16 05:41:14.621645 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-16 05:41:14.621655 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-16 05:41:14.621666 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-16 05:41:14.621676 | orchestrator | ok: [testbed-node-0] => 
(item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-16 05:41:14.621687 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-16 05:41:14.621698 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-16 05:41:14.621709 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-16 05:41:14.621719 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-16 05:41:14.621730 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-16 05:41:14.621740 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-16 05:41:14.621751 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-16 05:41:14.621762 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-16 05:41:14.621776 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-16 05:41:14.621797 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-16 05:41:14.621821 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-16 05:41:14.621848 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-16 05:41:14.621876 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-16 05:41:14.621897 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-16 05:41:14.621915 | orchestrator | ok: [testbed-node-4] => (item={'name': 
'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-16 05:41:14.621933 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-16 05:41:14.621954 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-16 05:41:14.621995 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-16 05:41:14.622090 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-16 05:41:14.622119 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-16 05:41:14.622138 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-16 05:41:14.622172 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-16 05:44:02.482756 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-16 05:44:02.482859 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-16 05:44:02.482872 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-16 05:44:02.482882 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-16 05:44:02.482891 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 
'state': 'absent'}) 2026-02-16 05:44:02.482899 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-16 05:44:02.482907 | orchestrator | 2026-02-16 05:44:02.482916 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-16 05:44:02.482924 | orchestrator | Monday 16 February 2026 05:41:11 +0000 (0:00:21.126) 0:00:51.963 ******* 2026-02-16 05:44:02.482932 | orchestrator | 2026-02-16 05:44:02.482940 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-16 05:44:02.482948 | orchestrator | Monday 16 February 2026 05:41:11 +0000 (0:00:00.467) 0:00:52.431 ******* 2026-02-16 05:44:02.482956 | orchestrator | 2026-02-16 05:44:02.482965 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-16 05:44:02.482973 | orchestrator | Monday 16 February 2026 05:41:12 +0000 (0:00:00.466) 0:00:52.897 ******* 2026-02-16 05:44:02.482981 | orchestrator | 2026-02-16 05:44:02.482989 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-16 05:44:02.482996 | orchestrator | Monday 16 February 2026 05:41:12 +0000 (0:00:00.466) 0:00:53.364 ******* 2026-02-16 05:44:02.483004 | orchestrator | 2026-02-16 05:44:02.483012 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-16 05:44:02.483020 | orchestrator | Monday 16 February 2026 05:41:13 +0000 (0:00:00.439) 0:00:53.803 ******* 2026-02-16 05:44:02.483028 | orchestrator | 2026-02-16 05:44:02.483036 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-16 05:44:02.483044 | orchestrator | Monday 16 February 2026 05:41:13 +0000 (0:00:00.436) 0:00:54.239 ******* 2026-02-16 05:44:02.483051 | orchestrator | 2026-02-16 05:44:02.483059 | 
orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-16 05:44:02.483067 | orchestrator | Monday 16 February 2026 05:41:14 +0000 (0:00:00.806) 0:00:55.046 ******* 2026-02-16 05:44:02.483075 | orchestrator | 2026-02-16 05:44:02.483083 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] *** 2026-02-16 05:44:02.483091 | orchestrator | changed: [testbed-node-3] 2026-02-16 05:44:02.483100 | orchestrator | changed: [testbed-node-5] 2026-02-16 05:44:02.483108 | orchestrator | changed: [testbed-node-4] 2026-02-16 05:44:02.483115 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:44:02.483123 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:44:02.483131 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:44:02.483160 | orchestrator | 2026-02-16 05:44:02.483211 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-16 05:44:02.483220 | orchestrator | 2026-02-16 05:44:02.483228 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-16 05:44:02.483236 | orchestrator | Monday 16 February 2026 05:43:26 +0000 (0:02:11.639) 0:03:06.685 ******* 2026-02-16 05:44:02.483243 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:44:02.483251 | orchestrator | 2026-02-16 05:44:02.483259 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-16 05:44:02.483267 | orchestrator | Monday 16 February 2026 05:43:28 +0000 (0:00:01.879) 0:03:08.565 ******* 2026-02-16 05:44:02.483274 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-16 05:44:02.483284 | orchestrator | 2026-02-16 05:44:02.483307 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] 
************* 2026-02-16 05:44:02.483317 | orchestrator | Monday 16 February 2026 05:43:29 +0000 (0:00:01.871) 0:03:10.436 ******* 2026-02-16 05:44:02.483326 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:44:02.483336 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:44:02.483346 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:44:02.483358 | orchestrator | 2026-02-16 05:44:02.483474 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-16 05:44:02.483493 | orchestrator | Monday 16 February 2026 05:43:31 +0000 (0:00:01.974) 0:03:12.411 ******* 2026-02-16 05:44:02.483506 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:44:02.483519 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:44:02.483532 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:44:02.483545 | orchestrator | 2026-02-16 05:44:02.483560 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-16 05:44:02.483574 | orchestrator | Monday 16 February 2026 05:43:33 +0000 (0:00:01.336) 0:03:13.748 ******* 2026-02-16 05:44:02.483587 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:44:02.483600 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:44:02.483613 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:44:02.483627 | orchestrator | 2026-02-16 05:44:02.483641 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-16 05:44:02.483654 | orchestrator | Monday 16 February 2026 05:43:34 +0000 (0:00:01.355) 0:03:15.103 ******* 2026-02-16 05:44:02.483666 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:44:02.483679 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:44:02.483693 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:44:02.483706 | orchestrator | 2026-02-16 05:44:02.483720 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-16 05:44:02.483733 | orchestrator | Monday 16 
February 2026 05:43:36 +0000 (0:00:01.567) 0:03:16.670 ******* 2026-02-16 05:44:02.483746 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:44:02.483782 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:44:02.483791 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:44:02.483799 | orchestrator | 2026-02-16 05:44:02.483807 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-16 05:44:02.483815 | orchestrator | Monday 16 February 2026 05:43:37 +0000 (0:00:01.343) 0:03:18.014 ******* 2026-02-16 05:44:02.483823 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:44:02.483831 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:44:02.483839 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:44:02.483847 | orchestrator | 2026-02-16 05:44:02.483855 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-16 05:44:02.483862 | orchestrator | Monday 16 February 2026 05:43:38 +0000 (0:00:01.366) 0:03:19.380 ******* 2026-02-16 05:44:02.483870 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:44:02.483878 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:44:02.483886 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:44:02.483893 | orchestrator | 2026-02-16 05:44:02.483901 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-16 05:44:02.483922 | orchestrator | Monday 16 February 2026 05:43:40 +0000 (0:00:01.767) 0:03:21.148 ******* 2026-02-16 05:44:02.483929 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:44:02.483937 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:44:02.483945 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:44:02.483952 | orchestrator | 2026-02-16 05:44:02.483960 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-16 05:44:02.483968 | orchestrator | Monday 16 February 2026 05:43:42 +0000 (0:00:01.636) 0:03:22.785 
******* 2026-02-16 05:44:02.483975 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:44:02.483983 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:44:02.483991 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:44:02.483999 | orchestrator | 2026-02-16 05:44:02.484006 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-16 05:44:02.484014 | orchestrator | Monday 16 February 2026 05:43:44 +0000 (0:00:01.919) 0:03:24.705 ******* 2026-02-16 05:44:02.484022 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:44:02.484030 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:44:02.484037 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:44:02.484049 | orchestrator | 2026-02-16 05:44:02.484063 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-16 05:44:02.484125 | orchestrator | Monday 16 February 2026 05:43:45 +0000 (0:00:01.529) 0:03:26.234 ******* 2026-02-16 05:44:02.484133 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:44:02.484140 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:44:02.484148 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:44:02.484156 | orchestrator | 2026-02-16 05:44:02.484164 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-16 05:44:02.484255 | orchestrator | Monday 16 February 2026 05:43:47 +0000 (0:00:01.380) 0:03:27.615 ******* 2026-02-16 05:44:02.484264 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:44:02.484271 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:44:02.484279 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:44:02.484287 | orchestrator | 2026-02-16 05:44:02.484295 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-16 05:44:02.484303 | orchestrator | Monday 16 February 2026 05:43:48 +0000 (0:00:01.375) 0:03:28.991 ******* 2026-02-16 05:44:02.484310 | 
orchestrator | ok: [testbed-node-0] 2026-02-16 05:44:02.484318 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:44:02.484326 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:44:02.484334 | orchestrator | 2026-02-16 05:44:02.484341 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-16 05:44:02.484349 | orchestrator | Monday 16 February 2026 05:43:50 +0000 (0:00:01.875) 0:03:30.867 ******* 2026-02-16 05:44:02.484357 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:44:02.484365 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:44:02.484372 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:44:02.484380 | orchestrator | 2026-02-16 05:44:02.484388 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-16 05:44:02.484396 | orchestrator | Monday 16 February 2026 05:43:51 +0000 (0:00:01.399) 0:03:32.266 ******* 2026-02-16 05:44:02.484404 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:44:02.484411 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:44:02.484419 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:44:02.484427 | orchestrator | 2026-02-16 05:44:02.484435 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-16 05:44:02.484442 | orchestrator | Monday 16 February 2026 05:43:53 +0000 (0:00:02.126) 0:03:34.392 ******* 2026-02-16 05:44:02.484459 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:44:02.484467 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:44:02.484475 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:44:02.484482 | orchestrator | 2026-02-16 05:44:02.484490 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-02-16 05:44:02.484498 | orchestrator | Monday 16 February 2026 05:43:55 +0000 (0:00:01.395) 0:03:35.788 ******* 2026-02-16 05:44:02.484506 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:44:02.484522 | 
orchestrator | skipping: [testbed-node-1] 2026-02-16 05:44:02.484530 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:44:02.484538 | orchestrator | 2026-02-16 05:44:02.484545 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-16 05:44:02.484553 | orchestrator | Monday 16 February 2026 05:43:56 +0000 (0:00:01.384) 0:03:37.173 ******* 2026-02-16 05:44:02.484561 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:44:02.484569 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:44:02.484576 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:44:02.484584 | orchestrator | 2026-02-16 05:44:02.484592 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-16 05:44:02.484599 | orchestrator | Monday 16 February 2026 05:43:58 +0000 (0:00:01.676) 0:03:38.849 ******* 2026-02-16 05:44:02.484621 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:44:08.660907 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:44:08.661049 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:44:08.661077 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:44:08.661100 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:44:08.661119 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:44:08.661264 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:44:08.661291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:44:08.661338 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:44:08.661359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:44:08.661377 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-16 05:44:08.661395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-16 05:44:08.661415 | orchestrator | 2026-02-16 05:44:08.661438 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-16 05:44:08.661458 | orchestrator | Monday 16 February 2026 05:44:02 +0000 (0:00:04.089) 0:03:42.939 ******* 2026-02-16 
05:44:08.661479 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:08.661527 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:08.661550 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:08.661570 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:08.661604 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:23.632143 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:23.632332 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:23.632356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:23.632388 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:23.632410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:23.632419 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:23.632427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:23.632435 | orchestrator |
2026-02-16 05:44:23.632445 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-02-16 05:44:23.632454 | orchestrator | Monday 16 February 2026 05:44:08 +0000 (0:00:06.181) 0:03:49.120 *******
2026-02-16 05:44:23.632462 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-02-16 05:44:23.632470 | orchestrator |
2026-02-16 05:44:23.632478 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-02-16 05:44:23.632491 | orchestrator | Monday 16 February 2026 05:44:10 +0000 (0:00:01.917) 0:03:51.038 *******
2026-02-16 05:44:23.632502 | orchestrator | changed: [testbed-node-0]
2026-02-16 05:44:23.632514 | orchestrator | changed: [testbed-node-1]
2026-02-16 05:44:23.632543 | orchestrator | changed: [testbed-node-2]
2026-02-16 05:44:23.632557 | orchestrator |
2026-02-16 05:44:23.632570 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-02-16 05:44:23.632582 | orchestrator | Monday 16 February 2026 05:44:12 +0000
(0:00:01.797) 0:03:52.835 *******
2026-02-16 05:44:23.632595 | orchestrator | changed: [testbed-node-0]
2026-02-16 05:44:23.632602 | orchestrator | changed: [testbed-node-1]
2026-02-16 05:44:23.632610 | orchestrator | changed: [testbed-node-2]
2026-02-16 05:44:23.632617 | orchestrator |
2026-02-16 05:44:23.632624 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-02-16 05:44:23.632631 | orchestrator | Monday 16 February 2026 05:44:15 +0000 (0:00:02.882) 0:03:55.718 *******
2026-02-16 05:44:23.632638 | orchestrator | changed: [testbed-node-0]
2026-02-16 05:44:23.632645 | orchestrator | changed: [testbed-node-1]
2026-02-16 05:44:23.632652 | orchestrator | changed: [testbed-node-2]
2026-02-16 05:44:23.632659 | orchestrator |
2026-02-16 05:44:23.632666 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ********************
2026-02-16 05:44:23.632681 | orchestrator | Monday 16 February 2026 05:44:18 +0000 (0:00:02.777) 0:03:58.495 *******
2026-02-16 05:44:23.632689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:23.632699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:23.632712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:23.632720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:23.632727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:23.632735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:23.632749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:28.267858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:28.267951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:28.267964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:28.267988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:28.267997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:28.268006 | orchestrator |
2026-02-16 05:44:28.268016 | orchestrator | TASK [service-check-containers :
ovn_db | Notify handlers to restart containers] ***
2026-02-16 05:44:28.268026 | orchestrator | Monday 16 February 2026 05:44:23 +0000 (0:00:05.589) 0:04:04.084 *******
2026-02-16 05:44:28.268035 | orchestrator | changed: [testbed-node-0] => {
2026-02-16 05:44:28.268044 | orchestrator |  "msg": "Notifying handlers"
2026-02-16 05:44:28.268052 | orchestrator | }
2026-02-16 05:44:28.268060 | orchestrator | changed: [testbed-node-1] => {
2026-02-16 05:44:28.268068 | orchestrator |  "msg": "Notifying handlers"
2026-02-16 05:44:28.268076 | orchestrator | }
2026-02-16 05:44:28.268084 | orchestrator | changed: [testbed-node-2] => {
2026-02-16 05:44:28.268091 | orchestrator |  "msg": "Notifying handlers"
2026-02-16 05:44:28.268099 | orchestrator | }
2026-02-16 05:44:28.268107 | orchestrator |
2026-02-16 05:44:28.268115 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-16 05:44:28.268123 | orchestrator | Monday 16 February 2026 05:44:25 +0000 (0:00:01.473) 0:04:05.558 *******
2026-02-16 05:44:28.268133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:28.268229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:28.268241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:28.268250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:28.268263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}}})
2026-02-16 05:44:28.268272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:28.268280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:28.268288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:28.268303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:44:28.268319 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-16 05:45:52.386189 | orchestrator |
2026-02-16 05:45:52.386335 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] *****
2026-02-16 05:45:52.386363 | orchestrator | Monday 16 February 2026 05:44:28 +0000 (0:00:03.167) 0:04:08.726 *******
2026-02-16 05:45:52.386384 | orchestrator | changed: [testbed-node-0] => (item=[1])
2026-02-16 05:45:52.386405 | orchestrator | changed: [testbed-node-1] => (item=[1])
2026-02-16 05:45:52.386425 | orchestrator | changed: [testbed-node-2] => (item=[1])
2026-02-16 05:45:52.386445 | orchestrator |
2026-02-16 05:45:52.386466 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-02-16 05:45:52.386488 | orchestrator | Monday 16 February 2026 05:44:30 +0000 (0:00:02.237) 0:04:10.964 *******
2026-02-16 05:45:52.386508 | orchestrator | changed: [testbed-node-0] => {
2026-02-16 05:45:52.386531 | orchestrator |  "msg": "Notifying handlers"
2026-02-16 05:45:52.386551 | orchestrator | }
2026-02-16 05:45:52.386563 | orchestrator | changed: [testbed-node-1] => {
2026-02-16 05:45:52.386574 | orchestrator |  "msg": "Notifying handlers"
2026-02-16 05:45:52.386585 | orchestrator | }
2026-02-16 05:45:52.386596 | orchestrator | changed: [testbed-node-2] => {
2026-02-16 05:45:52.386607 | orchestrator |  "msg": "Notifying handlers"
2026-02-16 05:45:52.386617 | orchestrator | }
2026-02-16 05:45:52.386630 | orchestrator |
2026-02-16 05:45:52.386643 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-16 05:45:52.386656 | orchestrator | Monday 16 February 2026 05:44:31 +0000 (0:00:01.463) 0:04:12.427 *******
2026-02-16 05:45:52.386668 | orchestrator |
2026-02-16 05:45:52.386681 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-16 05:45:52.386694 | orchestrator | Monday 16 February 2026 05:44:32 +0000 (0:00:00.437) 0:04:12.865 *******
2026-02-16 05:45:52.386706 | orchestrator |
2026-02-16 05:45:52.386719 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-16 05:45:52.386750 | orchestrator | Monday 16 February 2026 05:44:32 +0000 (0:00:00.446) 0:04:13.312 *******
2026-02-16 05:45:52.386763 | orchestrator |
2026-02-16 05:45:52.386775 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-16 05:45:52.386788 | orchestrator | Monday 16 February 2026 05:44:33 +0000 (0:00:01.016) 0:04:14.328 *******
2026-02-16 05:45:52.386800 | orchestrator | changed: [testbed-node-2]
2026-02-16 05:45:52.386812 | orchestrator | changed: [testbed-node-0]
2026-02-16 05:45:52.386824 | orchestrator | changed: [testbed-node-1]
2026-02-16 05:45:52.386836 | orchestrator |
2026-02-16 05:45:52.386848 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-16 05:45:52.386861 | orchestrator | Monday 16 February 2026 05:44:49 +0000 (0:00:15.688) 0:04:30.017 *******
2026-02-16 05:45:52.386896 | orchestrator | changed: [testbed-node-0]
2026-02-16 05:45:52.386909 | orchestrator | changed: [testbed-node-2]
2026-02-16 05:45:52.386921 | orchestrator | changed: [testbed-node-1]
2026-02-16 05:45:52.386933 | orchestrator |
2026-02-16 05:45:52.386944 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-02-16 05:45:52.386955 | orchestrator | Monday 16 February 2026 05:45:05 +0000 (0:00:16.005) 0:04:46.022 *******
2026-02-16 05:45:52.386966 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-02-16 05:45:52.386977 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-02-16 05:45:52.386987 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-02-16 05:45:52.386998 | orchestrator |
2026-02-16 05:45:52.387009 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-16 05:45:52.387020 | orchestrator | Monday 16 February 2026 05:45:15 +0000 (0:00:10.451) 0:04:56.474 *******
2026-02-16 05:45:52.387030 | orchestrator | changed: [testbed-node-2]
2026-02-16 05:45:52.387041 | orchestrator | changed: [testbed-node-0]
2026-02-16 05:45:52.387052 | orchestrator | changed: [testbed-node-1]
2026-02-16 05:45:52.387063 | orchestrator |
2026-02-16 05:45:52.387074 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-16 05:45:52.387085 | orchestrator | Monday 16 February 2026 05:45:32 +0000 (0:00:16.184) 0:05:12.659 *******
2026-02-16 05:45:52.387096 | orchestrator | Pausing for 5 seconds
2026-02-16 05:45:52.387107 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:45:52.387118 | orchestrator |
2026-02-16 05:45:52.387129 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-16 05:45:52.387140 | orchestrator | Monday 16 February 2026 05:45:38 +0000 (0:00:06.192) 0:05:18.851 *******
2026-02-16 05:45:52.387216 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:45:52.387228 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:45:52.387239 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:45:52.387249 | orchestrator |
2026-02-16 05:45:52.387260 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-16 05:45:52.387271 | orchestrator | Monday 16 February 2026 05:45:40 +0000 (0:00:01.798) 0:05:20.649 *******
2026-02-16 05:45:52.387282 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:45:52.387292 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:45:52.387303 | orchestrator | changed: [testbed-node-0]
2026-02-16 05:45:52.387314 | orchestrator |
2026-02-16 05:45:52.387325 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-16 05:45:52.387336 | orchestrator | Monday 16 February 2026 05:45:41 +0000 (0:00:01.822) 0:05:22.266 *******
2026-02-16 05:45:52.387346 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:45:52.387357 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:45:52.387368 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:45:52.387378 | orchestrator |
2026-02-16 05:45:52.387389 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-16 05:45:52.387400 | orchestrator | Monday 16 February 2026 05:45:43 +0000 (0:00:01.766) 0:05:24.089 *******
2026-02-16 05:45:52.387411 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:45:52.387422 | orchestrator | changed: [testbed-node-0]
2026-02-16 05:45:52.387432 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:45:52.387448 | orchestrator |
2026-02-16 05:45:52.387467 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-16 05:45:52.387485 | orchestrator | Monday 16 February 2026 05:45:45 +0000 (0:00:01.766) 0:05:25.855 *******
2026-02-16 05:45:52.387502 | orchestrator | ok:
[testbed-node-0]
2026-02-16 05:45:52.387518 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:45:52.387536 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:45:52.387554 | orchestrator |
2026-02-16 05:45:52.387573 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-16 05:45:52.387618 | orchestrator | Monday 16 February 2026 05:45:47 +0000 (0:00:01.759) 0:05:27.614 *******
2026-02-16 05:45:52.387638 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:45:52.387656 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:45:52.387688 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:45:52.387707 | orchestrator |
2026-02-16 05:45:52.387726 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-02-16 05:45:52.387745 | orchestrator | Monday 16 February 2026 05:45:48 +0000 (0:00:01.842) 0:05:29.457 *******
2026-02-16 05:45:52.387765 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-02-16 05:45:52.387782 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-02-16 05:45:52.387802 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-02-16 05:45:52.387814 | orchestrator |
2026-02-16 05:45:52.387825 | orchestrator | PLAY RECAP *********************************************************************
2026-02-16 05:45:52.387837 | orchestrator | testbed-node-0 : ok=50  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-16 05:45:52.387849 | orchestrator | testbed-node-1 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-16 05:45:52.387860 | orchestrator | testbed-node-2 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-16 05:45:52.387871 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 05:45:52.387890 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 05:45:52.387901 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-16 05:45:52.387912 | orchestrator |
2026-02-16 05:45:52.387923 | orchestrator |
2026-02-16 05:45:52.387934 | orchestrator | TASKS RECAP ********************************************************************
2026-02-16 05:45:52.387945 | orchestrator | Monday 16 February 2026 05:45:51 +0000 (0:00:02.996) 0:05:32.453 *******
2026-02-16 05:45:52.387956 | orchestrator | ===============================================================================
2026-02-16 05:45:52.387966 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 131.64s
2026-02-16 05:45:52.387977 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.13s
2026-02-16 05:45:52.387988 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 16.19s
2026-02-16 05:45:52.387998 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 16.01s
2026-02-16 05:45:52.388009 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 15.69s
2026-02-16 05:45:52.388025 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 10.45s
2026-02-16 05:45:52.388043 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.19s
2026-02-16 05:45:52.388061 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.18s
2026-02-16 05:45:52.388079 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.59s
2026-02-16 05:45:52.388097 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.09s
2026-02-16 05:45:52.388114 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.68s
2026-02-16 05:45:52.388131 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.17s
2026-02-16 05:45:52.388209 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.12s
2026-02-16 05:45:52.388230 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.08s
2026-02-16 05:45:52.388248 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 3.00s
2026-02-16 05:45:52.388266 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.88s
2026-02-16 05:45:52.388284 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.78s
2026-02-16 05:45:52.388302 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.69s
2026-02-16 05:45:52.388329 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.63s
2026-02-16 05:45:52.388347 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.62s
2026-02-16 05:45:52.730867 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-02-16 05:45:52.730992 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-16 05:45:52.731019 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh
2026-02-16 05:45:52.740107 | orchestrator | + set -e
2026-02-16 05:45:52.740229 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-16 05:45:52.740249 | orchestrator | ++ export INTERACTIVE=false
2026-02-16 05:45:52.740608 | orchestrator | ++ INTERACTIVE=false
2026-02-16 05:45:52.740630 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-16 05:45:52.740641 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-16 05:45:52.740653 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes
2026-02-16 05:45:54.795974 | orchestrator | 2026-02-16 05:45:54 | INFO  | Task ac6f89e1-4c4c-41e9-bbc6-73312d05a71c
(ceph-rolling_update) was prepared for execution. 2026-02-16 05:45:54.796076 | orchestrator | 2026-02-16 05:45:54 | INFO  | It takes a moment until task ac6f89e1-4c4c-41e9-bbc6-73312d05a71c (ceph-rolling_update) has been started and output is visible here. 2026-02-16 05:46:51.263495 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-16 05:46:51.263591 | orchestrator | 2.16.14 2026-02-16 05:46:51.263603 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-16 05:46:51.263610 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-16 05:46:51.263622 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-16 05:46:51.263628 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-16 05:46:51.263639 | orchestrator | 2026-02-16 05:46:51.263646 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-02-16 05:46:51.263651 | orchestrator | 2026-02-16 05:46:51.263657 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-02-16 05:46:51.263663 | orchestrator | Monday 16 February 2026 05:46:02 +0000 (0:00:01.078) 0:00:01.078 ******* 2026-02-16 05:46:51.263668 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-02-16 05:46:51.263673 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-02-16 05:46:51.263679 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-02-16 05:46:51.263684 | orchestrator | skipping: [localhost] 2026-02-16 05:46:51.263690 | orchestrator | 2026-02-16 05:46:51.263695 | orchestrator | PLAY [Gather facts and check the init system] ********************************** 2026-02-16 05:46:51.263701 | orchestrator | 2026-02-16 05:46:51.263706 | orchestrator | TASK [Gather facts on all Ceph hosts 
for following reference] ****************** 2026-02-16 05:46:51.263712 | orchestrator | Monday 16 February 2026 05:46:03 +0000 (0:00:01.007) 0:00:02.086 ******* 2026-02-16 05:46:51.263730 | orchestrator | ok: [testbed-node-0] => { 2026-02-16 05:46:51.263736 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-16 05:46:51.263742 | orchestrator | } 2026-02-16 05:46:51.263748 | orchestrator | ok: [testbed-node-1] => { 2026-02-16 05:46:51.263753 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-16 05:46:51.263759 | orchestrator | } 2026-02-16 05:46:51.263764 | orchestrator | ok: [testbed-node-2] => { 2026-02-16 05:46:51.263770 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-16 05:46:51.263775 | orchestrator | } 2026-02-16 05:46:51.263780 | orchestrator | ok: [testbed-node-3] => { 2026-02-16 05:46:51.263786 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-16 05:46:51.263809 | orchestrator | } 2026-02-16 05:46:51.263814 | orchestrator | ok: [testbed-node-4] => { 2026-02-16 05:46:51.263820 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-16 05:46:51.263825 | orchestrator | } 2026-02-16 05:46:51.263831 | orchestrator | ok: [testbed-node-5] => { 2026-02-16 05:46:51.263836 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-16 05:46:51.263841 | orchestrator | } 2026-02-16 05:46:51.263847 | orchestrator | ok: [testbed-manager] => { 2026-02-16 05:46:51.263852 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-16 05:46:51.263857 | orchestrator | } 2026-02-16 05:46:51.263863 | orchestrator | 2026-02-16 05:46:51.263868 | orchestrator | TASK [Gather facts] ************************************************************ 2026-02-16 05:46:51.263874 | orchestrator | Monday 16 February 2026 05:46:05 +0000 
(0:00:01.926) 0:00:04.013 ******* 2026-02-16 05:46:51.263879 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:46:51.263885 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:46:51.263890 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:46:51.263896 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:46:51.263901 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:46:51.263906 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:46:51.263911 | orchestrator | ok: [testbed-manager] 2026-02-16 05:46:51.263917 | orchestrator | 2026-02-16 05:46:51.263922 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-02-16 05:46:51.263928 | orchestrator | Monday 16 February 2026 05:46:08 +0000 (0:00:03.930) 0:00:07.943 ******* 2026-02-16 05:46:51.263933 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-16 05:46:51.263939 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-16 05:46:51.263944 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-16 05:46:51.263949 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-16 05:46:51.263955 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-16 05:46:51.263960 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-16 05:46:51.263966 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 05:46:51.263971 | orchestrator | 2026-02-16 05:46:51.263976 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-02-16 05:46:51.263982 | orchestrator | Monday 16 February 2026 05:46:38 +0000 (0:00:29.761) 0:00:37.704 ******* 2026-02-16 05:46:51.263987 | orchestrator | ok: [testbed-node-0] 
2026-02-16 05:46:51.263993 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:46:51.263998 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:46:51.264003 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:46:51.264009 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:46:51.264014 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:46:51.264019 | orchestrator | ok: [testbed-manager] 2026-02-16 05:46:51.264025 | orchestrator | 2026-02-16 05:46:51.264030 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-16 05:46:51.264036 | orchestrator | Monday 16 February 2026 05:46:39 +0000 (0:00:00.917) 0:00:38.622 ******* 2026-02-16 05:46:51.264054 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-16 05:46:51.264063 | orchestrator | 2026-02-16 05:46:51.264070 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-16 05:46:51.264076 | orchestrator | Monday 16 February 2026 05:46:41 +0000 (0:00:01.868) 0:00:40.490 ******* 2026-02-16 05:46:51.264082 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:46:51.264089 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:46:51.264095 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:46:51.264101 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:46:51.264112 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:46:51.264119 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:46:51.264125 | orchestrator | ok: [testbed-manager] 2026-02-16 05:46:51.264154 | orchestrator | 2026-02-16 05:46:51.264161 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-16 05:46:51.264167 | orchestrator | Monday 16 February 2026 05:46:42 +0000 (0:00:01.314) 0:00:41.804 ******* 2026-02-16 05:46:51.264173 | orchestrator | ok: [testbed-node-0] 2026-02-16 
05:46:51.264179 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:46:51.264185 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:46:51.264191 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:46:51.264198 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:46:51.264204 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:46:51.264210 | orchestrator | ok: [testbed-manager] 2026-02-16 05:46:51.264216 | orchestrator | 2026-02-16 05:46:51.264222 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-16 05:46:51.264229 | orchestrator | Monday 16 February 2026 05:46:43 +0000 (0:00:00.775) 0:00:42.579 ******* 2026-02-16 05:46:51.264235 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:46:51.264241 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:46:51.264247 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:46:51.264253 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:46:51.264259 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:46:51.264265 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:46:51.264272 | orchestrator | ok: [testbed-manager] 2026-02-16 05:46:51.264278 | orchestrator | 2026-02-16 05:46:51.264284 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-16 05:46:51.264296 | orchestrator | Monday 16 February 2026 05:46:44 +0000 (0:00:01.376) 0:00:43.956 ******* 2026-02-16 05:46:51.264301 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:46:51.264307 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:46:51.264312 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:46:51.264317 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:46:51.264323 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:46:51.264328 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:46:51.264333 | orchestrator | ok: [testbed-manager] 2026-02-16 05:46:51.264339 | orchestrator | 2026-02-16 05:46:51.264344 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] 
****************************************** 2026-02-16 05:46:51.264349 | orchestrator | Monday 16 February 2026 05:46:45 +0000 (0:00:00.788) 0:00:44.745 ******* 2026-02-16 05:46:51.264355 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:46:51.264360 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:46:51.264366 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:46:51.264371 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:46:51.264376 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:46:51.264382 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:46:51.264387 | orchestrator | ok: [testbed-manager] 2026-02-16 05:46:51.264392 | orchestrator | 2026-02-16 05:46:51.264398 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-16 05:46:51.264403 | orchestrator | Monday 16 February 2026 05:46:46 +0000 (0:00:00.962) 0:00:45.708 ******* 2026-02-16 05:46:51.264409 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:46:51.264414 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:46:51.264419 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:46:51.264424 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:46:51.264430 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:46:51.264435 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:46:51.264441 | orchestrator | ok: [testbed-manager] 2026-02-16 05:46:51.264446 | orchestrator | 2026-02-16 05:46:51.264451 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-16 05:46:51.264457 | orchestrator | Monday 16 February 2026 05:46:47 +0000 (0:00:00.778) 0:00:46.486 ******* 2026-02-16 05:46:51.264462 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:46:51.264468 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:46:51.264473 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:46:51.264478 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:46:51.264488 | orchestrator | skipping: 
[testbed-node-4] 2026-02-16 05:46:51.264493 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:46:51.264499 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:46:51.264504 | orchestrator | 2026-02-16 05:46:51.264509 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-16 05:46:51.264515 | orchestrator | Monday 16 February 2026 05:46:48 +0000 (0:00:01.000) 0:00:47.486 ******* 2026-02-16 05:46:51.264520 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:46:51.264525 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:46:51.264531 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:46:51.264536 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:46:51.264541 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:46:51.264547 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:46:51.264552 | orchestrator | ok: [testbed-manager] 2026-02-16 05:46:51.264557 | orchestrator | 2026-02-16 05:46:51.264563 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-16 05:46:51.264568 | orchestrator | Monday 16 February 2026 05:46:49 +0000 (0:00:00.708) 0:00:48.195 ******* 2026-02-16 05:46:51.264574 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 05:46:51.264579 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-16 05:46:51.264584 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-16 05:46:51.264590 | orchestrator | 2026-02-16 05:46:51.264595 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-16 05:46:51.264601 | orchestrator | Monday 16 February 2026 05:46:50 +0000 (0:00:01.138) 0:00:49.333 ******* 2026-02-16 05:46:51.264606 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:46:51.264611 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:46:51.264617 | orchestrator | ok: 
[testbed-node-2] 2026-02-16 05:46:51.264622 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:46:51.264627 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:46:51.264633 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:46:51.264638 | orchestrator | ok: [testbed-manager] 2026-02-16 05:46:51.264643 | orchestrator | 2026-02-16 05:46:51.264649 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-16 05:46:51.264658 | orchestrator | Monday 16 February 2026 05:46:51 +0000 (0:00:00.895) 0:00:50.229 ******* 2026-02-16 05:47:02.730463 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 05:47:02.730558 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-16 05:47:02.730569 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-16 05:47:02.730576 | orchestrator | 2026-02-16 05:47:02.730584 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-16 05:47:02.730592 | orchestrator | Monday 16 February 2026 05:46:53 +0000 (0:00:02.259) 0:00:52.488 ******* 2026-02-16 05:47:02.730599 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-16 05:47:02.730606 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-16 05:47:02.730613 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-16 05:47:02.730619 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:02.730625 | orchestrator | 2026-02-16 05:47:02.730632 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-16 05:47:02.730638 | orchestrator | Monday 16 February 2026 05:46:53 +0000 (0:00:00.403) 0:00:52.892 ******* 2026-02-16 05:47:02.730646 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-16 05:47:02.730654 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-16 05:47:02.730688 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-16 05:47:02.730695 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:02.730702 | orchestrator | 2026-02-16 05:47:02.730708 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-16 05:47:02.730716 | orchestrator | Monday 16 February 2026 05:46:54 +0000 (0:00:00.877) 0:00:53.769 ******* 2026-02-16 05:47:02.730724 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:02.730732 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:02.730739 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:02.730745 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:02.730751 | orchestrator | 2026-02-16 05:47:02.730757 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-16 05:47:02.730763 | orchestrator | Monday 16 February 2026 05:46:54 +0000 (0:00:00.196) 0:00:53.965 ******* 2026-02-16 05:47:02.730835 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c4764146f42e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-16 05:46:51.896540', 'end': '2026-02-16 05:46:51.947186', 'delta': '0:00:00.050646', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c4764146f42e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-16 05:47:02.730862 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '8a5d26661ef8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-16 05:46:52.723814', 'end': '2026-02-16 05:46:52.770072', 'delta': '0:00:00.046258', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': 
False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8a5d26661ef8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-16 05:47:02.730871 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '6720fcec1b21', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-16 05:46:53.304668', 'end': '2026-02-16 05:46:53.351587', 'delta': '0:00:00.046919', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6720fcec1b21'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-16 05:47:02.730883 | orchestrator | 2026-02-16 05:47:02.730889 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-16 05:47:02.730895 | orchestrator | Monday 16 February 2026 05:46:55 +0000 (0:00:00.425) 0:00:54.390 ******* 2026-02-16 05:47:02.730900 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:47:02.730906 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:47:02.730911 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:47:02.730916 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:47:02.730922 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:47:02.730927 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:47:02.730933 | orchestrator | ok: [testbed-manager] 2026-02-16 05:47:02.730938 | orchestrator | 2026-02-16 05:47:02.730943 | orchestrator | TASK [ceph-facts : Get current 
fsid if cluster is already running] ************* 2026-02-16 05:47:02.730949 | orchestrator | Monday 16 February 2026 05:46:56 +0000 (0:00:00.884) 0:00:55.275 ******* 2026-02-16 05:47:02.730955 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:02.730960 | orchestrator | 2026-02-16 05:47:02.730965 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-16 05:47:02.730971 | orchestrator | Monday 16 February 2026 05:46:56 +0000 (0:00:00.233) 0:00:55.509 ******* 2026-02-16 05:47:02.730976 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:47:02.730982 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:47:02.730987 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:47:02.730992 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:47:02.730998 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:47:02.731003 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:47:02.731009 | orchestrator | ok: [testbed-manager] 2026-02-16 05:47:02.731016 | orchestrator | 2026-02-16 05:47:02.731024 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-16 05:47:02.731033 | orchestrator | Monday 16 February 2026 05:46:57 +0000 (0:00:01.017) 0:00:56.526 ******* 2026-02-16 05:47:02.731042 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:47:02.731049 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-16 05:47:02.731056 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-16 05:47:02.731063 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-16 05:47:02.731069 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-16 05:47:02.731076 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-16 05:47:02.731083 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-16 05:47:02.731089 | orchestrator | 2026-02-16 05:47:02.731096 | 
orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-16 05:47:02.731103 | orchestrator | Monday 16 February 2026 05:46:59 +0000 (0:00:02.399) 0:00:58.926 ******* 2026-02-16 05:47:02.731109 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:47:02.731116 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:47:02.731123 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:47:02.731153 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:47:02.731162 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:47:02.731170 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:47:02.731178 | orchestrator | ok: [testbed-manager] 2026-02-16 05:47:02.731185 | orchestrator | 2026-02-16 05:47:02.731192 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-16 05:47:02.731199 | orchestrator | Monday 16 February 2026 05:47:00 +0000 (0:00:01.027) 0:00:59.953 ******* 2026-02-16 05:47:02.731205 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:02.731219 | orchestrator | 2026-02-16 05:47:02.731227 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-16 05:47:02.731236 | orchestrator | Monday 16 February 2026 05:47:01 +0000 (0:00:00.158) 0:01:00.111 ******* 2026-02-16 05:47:02.731245 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:02.731252 | orchestrator | 2026-02-16 05:47:02.731260 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-16 05:47:02.731268 | orchestrator | Monday 16 February 2026 05:47:01 +0000 (0:00:00.229) 0:01:00.341 ******* 2026-02-16 05:47:02.731275 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:02.731280 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:02.731287 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:02.731292 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:02.731298 | orchestrator | 
skipping: [testbed-node-4] 2026-02-16 05:47:02.731313 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:08.127016 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:08.127195 | orchestrator | 2026-02-16 05:47:08.127226 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-16 05:47:08.127245 | orchestrator | Monday 16 February 2026 05:47:02 +0000 (0:00:01.355) 0:01:01.696 ******* 2026-02-16 05:47:08.127262 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:08.127280 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:08.127296 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:08.127310 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:08.127321 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:08.127330 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:08.127340 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:08.127350 | orchestrator | 2026-02-16 05:47:08.127360 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-16 05:47:08.127370 | orchestrator | Monday 16 February 2026 05:47:03 +0000 (0:00:00.785) 0:01:02.481 ******* 2026-02-16 05:47:08.127380 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:08.127389 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:08.127398 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:08.127408 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:08.127417 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:08.127426 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:08.127436 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:08.127445 | orchestrator | 2026-02-16 05:47:08.127455 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-16 05:47:08.127464 | orchestrator | Monday 16 February 2026 05:47:04 +0000 
(0:00:00.956) 0:01:03.437 ******* 2026-02-16 05:47:08.127474 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:08.127483 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:08.127493 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:08.127502 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:08.127511 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:08.127537 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:08.127549 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:08.127561 | orchestrator | 2026-02-16 05:47:08.127573 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-16 05:47:08.127584 | orchestrator | Monday 16 February 2026 05:47:05 +0000 (0:00:00.762) 0:01:04.200 ******* 2026-02-16 05:47:08.127595 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:08.127606 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:08.127617 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:08.127628 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:08.127639 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:08.127650 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:08.127662 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:08.127677 | orchestrator | 2026-02-16 05:47:08.127693 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-16 05:47:08.127720 | orchestrator | Monday 16 February 2026 05:47:06 +0000 (0:00:00.996) 0:01:05.196 ******* 2026-02-16 05:47:08.127781 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:08.127797 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:08.127811 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:08.127825 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:08.127840 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:08.127855 | orchestrator 
| skipping: [testbed-node-5] 2026-02-16 05:47:08.127870 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:08.127888 | orchestrator | 2026-02-16 05:47:08.127904 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-16 05:47:08.127921 | orchestrator | Monday 16 February 2026 05:47:06 +0000 (0:00:00.722) 0:01:05.919 ******* 2026-02-16 05:47:08.127936 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:08.127952 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:08.127968 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:08.127983 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:08.128000 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:08.128015 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:08.128031 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:08.128047 | orchestrator | 2026-02-16 05:47:08.128065 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-16 05:47:08.128083 | orchestrator | Monday 16 February 2026 05:47:07 +0000 (0:00:00.939) 0:01:06.859 ******* 2026-02-16 05:47:08.128102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.128123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.128167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.128216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-16 05:47:08.128238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.128266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': 
'0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.128300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.128322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2335e156', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 
'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-16 05:47:08.128356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.301812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.301912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': 
'0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.301964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.301978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.301992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-16 05:47:08.302007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.302078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.302091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.302231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd4296cc6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part14', 
'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-16 05:47:08.302262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.302274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.302286 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:08.302299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.302310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.302322 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.302342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-28-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-16 05:47:08.525477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.525660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.525694 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.525721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c7144733', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-16 05:47:08.525746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.525791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.525830 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:08.525856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}})  2026-02-16 05:47:08.525888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--50d7a967--e09e--512a--aa83--aa9bbdf9ab74-osd--block--50d7a967--e09e--512a--aa83--aa9bbdf9ab74', 'dm-uuid-LVM-2dhVtclKCjfsjMcDe2D03F1qrxXtffQzYuMeigkCrxOY0hLAH1gOwaoo3bAqwsvb'], 'uuids': ['b3748582-e358-45b0-b8aa-f881226dc8da'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '51f5f49d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['YuMeig-kCrx-OY0h-LAH1-gOwa-oo3b-Aqwsvb']}})  2026-02-16 05:47:08.525913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2', 'scsi-SQEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '843bc551', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-16 05:47:08.525935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1ITxS0-SFz0-FdlF-VzSF-Uv8m-y10A-m0caaJ', 'scsi-0QEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51', 'scsi-SQEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0693774e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2f9a42f5--b575--5e11--9555--a5550e2fae1e-osd--block--2f9a42f5--b575--5e11--9555--a5550e2fae1e']}})  2026-02-16 05:47:08.525956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.525977 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:08.526002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.526120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-22-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-16 05:47:08.598637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.598751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-c3eDMW-CQjE-FL46-zd8q-XZ7h-Wvk7-L0nQAD', 'dm-uuid-CRYPT-LUKS2-011f269142c14738a165566bf449f017-c3eDMW-CQjE-FL46-zd8q-XZ7h-Wvk7-L0nQAD'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-16 05:47:08.598769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-02-16 05:47:08.598782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2f9a42f5--b575--5e11--9555--a5550e2fae1e-osd--block--2f9a42f5--b575--5e11--9555--a5550e2fae1e', 'dm-uuid-LVM-F4bqzAKmgcv4nzZjVJIDDLRdBkjdiY7Ac3eDMWCQjEFL46zd8qXZ7hWvk7L0nQAD'], 'uuids': ['011f2691-42c1-4738-a165-566bf449f017'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0693774e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['c3eDMW-CQjE-FL46-zd8q-XZ7h-Wvk7-L0nQAD']}})  2026-02-16 05:47:08.598796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UNvti2-beMu-mtun-nkoB-anD7-j3vD-BO56Wb', 'scsi-0QEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e', 'scsi-SQEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '51f5f49d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--50d7a967--e09e--512a--aa83--aa9bbdf9ab74-osd--block--50d7a967--e09e--512a--aa83--aa9bbdf9ab74']}})  2026-02-16 05:47:08.598809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.598840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.598884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2168da4d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part16', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part14', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part15', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part1', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-16 05:47:08.598899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3ec6a818--dc71--5cb4--ac47--83f209d09bca-osd--block--3ec6a818--dc71--5cb4--ac47--83f209d09bca', 'dm-uuid-LVM-IKNT1aRSRRXmVnhjGHBWtObOyhGZoCrKxknn5549qE5Iv1X6exAA2Hq2RDcxdb2r'], 'uuids': ['5964190e-3947-423a-9774-0a2e895129b4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0857a7ec', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xknn55-49qE-5Iv1-X6ex-AA2H-q2RD-cxdb2r']}})  2026-02-16 05:47:08.598911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.598923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705', 'scsi-SQEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57ea9400', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-16 05:47:08.598950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.598974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-W4T77R-WX0u-2wiK-0VwS-pHXw-eigq-78SyVp', 'scsi-0QEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829', 'scsi-SQEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '769208b9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d-osd--block--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d']}})  2026-02-16 05:47:08.733189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-YuMeig-kCrx-OY0h-LAH1-gOwa-oo3b-Aqwsvb', 'dm-uuid-CRYPT-LUKS2-b3748582e35845b0b8aaf881226dc8da-YuMeig-kCrx-OY0h-LAH1-gOwa-oo3b-Aqwsvb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-16 05:47:08.733320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.733349 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.733370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': 
'1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-16 05:47:08.733389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.733438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-qYYWm2-N1bk-S9UT-0Dip-02Aj-Kcu4-0awaVv', 'dm-uuid-CRYPT-LUKS2-7b6d91351d3c4adabcb6913cd16f15c7-qYYWm2-N1bk-S9UT-0Dip-02Aj-Kcu4-0awaVv'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-16 05:47:08.733460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.733522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': 
['dm-name-ceph--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d-osd--block--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d', 'dm-uuid-LVM-sWHkNGoua6AD2gtW0aHfBT1ggS3B4VVdqYYWm2N1bkS9UT0Dip02AjKcu40awaVv'], 'uuids': ['7b6d9135-1d3c-4ada-bcb6-913cd16f15c7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '769208b9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['qYYWm2-N1bk-S9UT-0Dip-02Aj-Kcu4-0awaVv']}})  2026-02-16 05:47:08.733540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ezeU5X-kiVi-Bwdm-EJU8-vTMX-Ty8v-7odRXz', 'scsi-0QEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e', 'scsi-SQEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0857a7ec', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--3ec6a818--dc71--5cb4--ac47--83f209d09bca-osd--block--3ec6a818--dc71--5cb4--ac47--83f209d09bca']}})  2026-02-16 05:47:08.733553 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:08.733565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.733579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '66717551', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part16', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part14', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part15', 
'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part1', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-16 05:47:08.733599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.733622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.815432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xknn55-49qE-5Iv1-X6ex-AA2H-q2RD-cxdb2r', 
'dm-uuid-CRYPT-LUKS2-5964190e3947423a97740a2e895129b4-xknn55-49qE-5Iv1-X6ex-AA2H-q2RD-cxdb2r'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-16 05:47:08.815529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.815541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f418f421--cc32--53ce--b421--39353fe37c02-osd--block--f418f421--cc32--53ce--b421--39353fe37c02', 'dm-uuid-LVM-fuzYkTDOD1mzGPTtEVy3HIfkbUT8vrouEUngu6j9gDpOiJ09icmXLIesmhVGIdAG'], 'uuids': ['ec5126ba-6809-43bf-b597-f55a08b20d1f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '560fea90', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['EUngu6-j9gD-pOiJ-09ic-mXLI-esmh-VGIdAG']}})  2026-02-16 05:47:08.815570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d', 'scsi-SQEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': 
None, 'sas_device_handle': None, 'serial': '22f5929b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-16 05:47:08.815579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-z25UVR-mt7s-2TOu-f4Na-2m38-OcPQ-rSbkPq', 'scsi-0QEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5', 'scsi-SQEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '864a7dfe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--10a0662d--59e9--5a43--af5c--1b6d671b7fa5-osd--block--10a0662d--59e9--5a43--af5c--1b6d671b7fa5']}})  2026-02-16 05:47:08.815586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.815608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.815630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-30-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-16 05:47:08.815637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.815643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-dSLmrZ-CRVK-IRBO-jrNi-ck0K-roaJ-NYuYcA', 'dm-uuid-CRYPT-LUKS2-ad5c8a1c7cef458c9644d8140426285b-dSLmrZ-CRVK-IRBO-jrNi-ck0K-roaJ-NYuYcA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-16 05:47:08.815655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.815661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--10a0662d--59e9--5a43--af5c--1b6d671b7fa5-osd--block--10a0662d--59e9--5a43--af5c--1b6d671b7fa5', 'dm-uuid-LVM-SWv31bXFKxTO3vyaMihj1WLbgzWvzkgjdSLmrZCRVKIRBOjrNick0KroaJNYuYcA'], 'uuids': ['ad5c8a1c-7cef-458c-9644-d8140426285b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '864a7dfe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['dSLmrZ-CRVK-IRBO-jrNi-ck0K-roaJ-NYuYcA']}})  2026-02-16 05:47:08.815668 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Qrttlw-98AS-fQrI-yUr1-wyrI-2oj6-dafTom', 'scsi-0QEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569', 'scsi-SQEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '560fea90', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f418f421--cc32--53ce--b421--39353fe37c02-osd--block--f418f421--cc32--53ce--b421--39353fe37c02']}})  2026-02-16 05:47:08.815678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:08.815694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f566252a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part16', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part14', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part15', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part1', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-16 05:47:09.451738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:09.451866 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:09.451893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:09.451918 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-EUngu6-j9gD-pOiJ-09ic-mXLI-esmh-VGIdAG', 'dm-uuid-CRYPT-LUKS2-ec5126ba680943bfb597f55a08b20d1f-EUngu6-j9gD-pOiJ-09ic-mXLI-esmh-VGIdAG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-16 05:47:09.451941 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:09.451983 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:09.452004 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:09.452025 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:09.452044 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-53-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-16 05:47:09.452094 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 
'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:09.452178 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:09.452202 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:09.452238 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f62a15e9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part16', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part14', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part15', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part1', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-16 05:47:09.452298 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:09.452320 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:47:09.452353 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:09.452376 | orchestrator | 2026-02-16 05:47:09.452398 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-16 05:47:09.452420 | orchestrator | Monday 16 February 2026 05:47:09 +0000 (0:00:01.174) 0:01:08.033 ******* 2026-02-16 05:47:09.452454 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:09.597627 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:09.597730 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:09.597771 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.597786 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.597820 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.597831 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.597876 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2335e156', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.597892 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.597911 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.597922 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.597942 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.782212 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.782321 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.782336 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.782366 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.782376 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.782412 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd4296cc6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.782425 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.782441 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.782451 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:47:09.782463 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.782473 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.782489 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.913714 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-28-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.913784 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.913809 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.913816 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.913841 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c7144733', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.913850 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.913860 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.913866 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:47:09.913874 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.913882 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--50d7a967--e09e--512a--aa83--aa9bbdf9ab74-osd--block--50d7a967--e09e--512a--aa83--aa9bbdf9ab74', 'dm-uuid-LVM-2dhVtclKCjfsjMcDe2D03F1qrxXtffQzYuMeigkCrxOY0hLAH1gOwaoo3bAqwsvb'], 'uuids': ['b3748582-e358-45b0-b8aa-f881226dc8da'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '51f5f49d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['YuMeig-kCrx-OY0h-LAH1-gOwa-oo3b-Aqwsvb']}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:09.913893 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2', 'scsi-SQEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '843bc551', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:10.042444 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1ITxS0-SFz0-FdlF-VzSF-Uv8m-y10A-m0caaJ', 'scsi-0QEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51', 'scsi-SQEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0693774e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2f9a42f5--b575--5e11--9555--a5550e2fae1e-osd--block--2f9a42f5--b575--5e11--9555--a5550e2fae1e']}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:10.042561 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:10.042577 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:47:10.042589 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:10.042601 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-22-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:10.042612 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:10.042644 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-c3eDMW-CQjE-FL46-zd8q-XZ7h-Wvk7-L0nQAD', 'dm-uuid-CRYPT-LUKS2-011f269142c14738a165566bf449f017-c3eDMW-CQjE-FL46-zd8q-XZ7h-Wvk7-L0nQAD'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:10.042669 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:10.042681 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2f9a42f5--b575--5e11--9555--a5550e2fae1e-osd--block--2f9a42f5--b575--5e11--9555--a5550e2fae1e', 'dm-uuid-LVM-F4bqzAKmgcv4nzZjVJIDDLRdBkjdiY7Ac3eDMWCQjEFL46zd8qXZ7hWvk7L0nQAD'], 'uuids': ['011f2691-42c1-4738-a165-566bf449f017'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0693774e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['c3eDMW-CQjE-FL46-zd8q-XZ7h-Wvk7-L0nQAD']}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:10.042692 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UNvti2-beMu-mtun-nkoB-anD7-j3vD-BO56Wb', 'scsi-0QEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e', 'scsi-SQEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '51f5f49d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--50d7a967--e09e--512a--aa83--aa9bbdf9ab74-osd--block--50d7a967--e09e--512a--aa83--aa9bbdf9ab74']}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:10.042703 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:10.042714 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:10.042735 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3ec6a818--dc71--5cb4--ac47--83f209d09bca-osd--block--3ec6a818--dc71--5cb4--ac47--83f209d09bca', 'dm-uuid-LVM-IKNT1aRSRRXmVnhjGHBWtObOyhGZoCrKxknn5549qE5Iv1X6exAA2Hq2RDcxdb2r'], 'uuids': ['5964190e-3947-423a-9774-0a2e895129b4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0857a7ec', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xknn55-49qE-5Iv1-X6ex-AA2H-q2RD-cxdb2r']}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:10.109640 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2168da4d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part16', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part14', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part15', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part1', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:10.109760 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705', 'scsi-SQEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57ea9400', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:10.109805 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 05:47:10.109877 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-W4T77R-WX0u-2wiK-0VwS-pHXw-eigq-78SyVp', 'scsi-0QEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829', 'scsi-SQEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '769208b9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': ['ceph--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d-osd--block--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d']}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.109900 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.109919 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.109939 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-YuMeig-kCrx-OY0h-LAH1-gOwa-oo3b-Aqwsvb', 'dm-uuid-CRYPT-LUKS2-b3748582e35845b0b8aaf881226dc8da-YuMeig-kCrx-OY0h-LAH1-gOwa-oo3b-Aqwsvb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.109958 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.109993 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.110097 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 
'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.245681 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-qYYWm2-N1bk-S9UT-0Dip-02Aj-Kcu4-0awaVv', 'dm-uuid-CRYPT-LUKS2-7b6d91351d3c4adabcb6913cd16f15c7-qYYWm2-N1bk-S9UT-0Dip-02Aj-Kcu4-0awaVv'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.245784 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:10.245803 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.245818 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d-osd--block--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d', 'dm-uuid-LVM-sWHkNGoua6AD2gtW0aHfBT1ggS3B4VVdqYYWm2N1bkS9UT0Dip02AjKcu40awaVv'], 'uuids': ['7b6d9135-1d3c-4ada-bcb6-913cd16f15c7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '769208b9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['qYYWm2-N1bk-S9UT-0Dip-02Aj-Kcu4-0awaVv']}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.245868 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ezeU5X-kiVi-Bwdm-EJU8-vTMX-Ty8v-7odRXz', 'scsi-0QEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e', 'scsi-SQEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0857a7ec', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--3ec6a818--dc71--5cb4--ac47--83f209d09bca-osd--block--3ec6a818--dc71--5cb4--ac47--83f209d09bca']}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.245884 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.245930 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '66717551', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part16', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part14', 
'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part15', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part1', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.245953 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.245992 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.246011 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-02-16 05:47:10.246114 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xknn55-49qE-5Iv1-X6ex-AA2H-q2RD-cxdb2r', 'dm-uuid-CRYPT-LUKS2-5964190e3947423a97740a2e895129b4-xknn55-49qE-5Iv1-X6ex-AA2H-q2RD-cxdb2r'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.356902 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f418f421--cc32--53ce--b421--39353fe37c02-osd--block--f418f421--cc32--53ce--b421--39353fe37c02', 'dm-uuid-LVM-fuzYkTDOD1mzGPTtEVy3HIfkbUT8vrouEUngu6j9gDpOiJ09icmXLIesmhVGIdAG'], 'uuids': ['ec5126ba-6809-43bf-b597-f55a08b20d1f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '560fea90', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['EUngu6-j9gD-pOiJ-09ic-mXLI-esmh-VGIdAG']}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.356993 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': 
{'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d', 'scsi-SQEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '22f5929b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.357040 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-z25UVR-mt7s-2TOu-f4Na-2m38-OcPQ-rSbkPq', 'scsi-0QEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5', 'scsi-SQEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '864a7dfe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--10a0662d--59e9--5a43--af5c--1b6d671b7fa5-osd--block--10a0662d--59e9--5a43--af5c--1b6d671b7fa5']}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.357054 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.357063 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.357086 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-30-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.357096 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.357105 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-dSLmrZ-CRVK-IRBO-jrNi-ck0K-roaJ-NYuYcA', 'dm-uuid-CRYPT-LUKS2-ad5c8a1c7cef458c9644d8140426285b-dSLmrZ-CRVK-IRBO-jrNi-ck0K-roaJ-NYuYcA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.357123 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.357163 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--10a0662d--59e9--5a43--af5c--1b6d671b7fa5-osd--block--10a0662d--59e9--5a43--af5c--1b6d671b7fa5', 'dm-uuid-LVM-SWv31bXFKxTO3vyaMihj1WLbgzWvzkgjdSLmrZCRVKIRBOjrNick0KroaJNYuYcA'], 'uuids': ['ad5c8a1c-7cef-458c-9644-d8140426285b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '864a7dfe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['dSLmrZ-CRVK-IRBO-jrNi-ck0K-roaJ-NYuYcA']}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.357188 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Qrttlw-98AS-fQrI-yUr1-wyrI-2oj6-dafTom', 'scsi-0QEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569', 'scsi-SQEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '560fea90', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f418f421--cc32--53ce--b421--39353fe37c02-osd--block--f418f421--cc32--53ce--b421--39353fe37c02']}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.490370 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.490521 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f566252a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part16', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part14', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part15', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part1', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.490566 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.490578 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:10.490606 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.490617 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-EUngu6-j9gD-pOiJ-09ic-mXLI-esmh-VGIdAG', 'dm-uuid-CRYPT-LUKS2-ec5126ba680943bfb597f55a08b20d1f-EUngu6-j9gD-pOiJ-09ic-mXLI-esmh-VGIdAG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.490634 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:10.490643 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.490660 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.490669 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.490679 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-53-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:10.490695 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:19.002713 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:19.002855 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:19.002889 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f62a15e9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part16', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 
'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part14', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part15', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part1', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:19.002923 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:19.002936 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:47:19.002956 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:19.002970 | orchestrator | 2026-02-16 05:47:19.002982 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-16 05:47:19.002994 | orchestrator | Monday 16 February 2026 05:47:10 +0000 (0:00:01.537) 0:01:09.570 ******* 2026-02-16 05:47:19.003005 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:47:19.003016 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:47:19.003027 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:47:19.003038 | orchestrator | ok: [testbed-node-3] 2026-02-16 
05:47:19.003048 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:47:19.003059 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:47:19.003069 | orchestrator | ok: [testbed-manager] 2026-02-16 05:47:19.003080 | orchestrator | 2026-02-16 05:47:19.003091 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-16 05:47:19.003102 | orchestrator | Monday 16 February 2026 05:47:11 +0000 (0:00:01.359) 0:01:10.930 ******* 2026-02-16 05:47:19.003113 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:47:19.003159 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:47:19.003173 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:47:19.003184 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:47:19.003194 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:47:19.003205 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:47:19.003216 | orchestrator | ok: [testbed-manager] 2026-02-16 05:47:19.003314 | orchestrator | 2026-02-16 05:47:19.003336 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-16 05:47:19.003360 | orchestrator | Monday 16 February 2026 05:47:12 +0000 (0:00:00.778) 0:01:11.708 ******* 2026-02-16 05:47:19.003380 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:47:19.003396 | orchestrator | ok: [testbed-node-1] 2026-02-16 05:47:19.003409 | orchestrator | ok: [testbed-node-2] 2026-02-16 05:47:19.003421 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:47:19.003434 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:19.003448 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:47:19.003476 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:47:19.003489 | orchestrator | 2026-02-16 05:47:19.003502 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-16 05:47:19.003514 | orchestrator | Monday 16 February 2026 05:47:13 +0000 (0:00:01.233) 0:01:12.942 ******* 2026-02-16 05:47:19.003527 | 
orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:19.003540 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:19.003553 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:19.003566 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:19.003578 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:19.003590 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:19.003603 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:19.003616 | orchestrator | 2026-02-16 05:47:19.003629 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-16 05:47:19.003641 | orchestrator | Monday 16 February 2026 05:47:14 +0000 (0:00:00.873) 0:01:13.816 ******* 2026-02-16 05:47:19.003652 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:19.003663 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:19.003673 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:19.003684 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:19.003694 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:19.003705 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:19.003716 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)] 2026-02-16 05:47:19.003726 | orchestrator | 2026-02-16 05:47:19.003738 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-16 05:47:19.003758 | orchestrator | Monday 16 February 2026 05:47:16 +0000 (0:00:01.588) 0:01:15.404 ******* 2026-02-16 05:47:19.003768 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:19.003779 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:19.003789 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:19.003800 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:19.003810 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:19.003821 | orchestrator | skipping: [testbed-node-5] 
2026-02-16 05:47:19.003831 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:19.003842 | orchestrator | 2026-02-16 05:47:19.003853 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-16 05:47:19.003863 | orchestrator | Monday 16 February 2026 05:47:17 +0000 (0:00:00.741) 0:01:16.146 ******* 2026-02-16 05:47:19.003874 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 05:47:19.003885 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-16 05:47:19.003895 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-16 05:47:19.003906 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-16 05:47:19.003916 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-16 05:47:19.003927 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-16 05:47:19.003937 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-16 05:47:19.003948 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-16 05:47:19.003958 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-16 05:47:19.003969 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-16 05:47:19.003980 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-16 05:47:19.003990 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-16 05:47:19.004011 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-16 05:47:37.589248 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-16 05:47:37.589366 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-16 05:47:37.589386 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-16 05:47:37.589398 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-16 05:47:37.589408 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-16 
05:47:37.589419 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-16 05:47:37.589430 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-16 05:47:37.589442 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-16 05:47:37.589454 | orchestrator | 2026-02-16 05:47:37.589467 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-16 05:47:37.589483 | orchestrator | Monday 16 February 2026 05:47:18 +0000 (0:00:01.829) 0:01:17.976 ******* 2026-02-16 05:47:37.589495 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-16 05:47:37.589506 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-16 05:47:37.589518 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-16 05:47:37.589529 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:37.589540 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-16 05:47:37.589552 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-16 05:47:37.589563 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-16 05:47:37.589575 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:37.589583 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-16 05:47:37.589590 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-16 05:47:37.589597 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-16 05:47:37.589603 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:37.589610 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-16 05:47:37.589682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-16 05:47:37.589690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-16 05:47:37.589697 | orchestrator | skipping: [testbed-node-3] 
2026-02-16 05:47:37.589704 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-16 05:47:37.589711 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-16 05:47:37.589717 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-16 05:47:37.589724 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:37.589731 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-16 05:47:37.589738 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-16 05:47:37.589744 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-16 05:47:37.589752 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:37.589760 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-16 05:47:37.589767 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-16 05:47:37.589775 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-16 05:47:37.589782 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:37.589789 | orchestrator | 2026-02-16 05:47:37.589797 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-16 05:47:37.589805 | orchestrator | Monday 16 February 2026 05:47:20 +0000 (0:00:01.044) 0:01:19.021 ******* 2026-02-16 05:47:37.589812 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:37.589820 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:37.589827 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:37.589834 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:37.589843 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 05:47:37.589851 | orchestrator | 2026-02-16 05:47:37.589858 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface 
from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-16 05:47:37.589867 | orchestrator | Monday 16 February 2026 05:47:21 +0000 (0:00:01.100) 0:01:20.121 ******* 2026-02-16 05:47:37.589875 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:37.589882 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:37.589889 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:37.589898 | orchestrator | 2026-02-16 05:47:37.589909 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-16 05:47:37.589921 | orchestrator | Monday 16 February 2026 05:47:21 +0000 (0:00:00.583) 0:01:20.704 ******* 2026-02-16 05:47:37.589931 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:37.589942 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:37.589953 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:37.589963 | orchestrator | 2026-02-16 05:47:37.589974 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-16 05:47:37.589986 | orchestrator | Monday 16 February 2026 05:47:22 +0000 (0:00:00.325) 0:01:21.030 ******* 2026-02-16 05:47:37.589997 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:37.590009 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:37.590071 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:37.590079 | orchestrator | 2026-02-16 05:47:37.590085 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-16 05:47:37.590092 | orchestrator | Monday 16 February 2026 05:47:22 +0000 (0:00:00.368) 0:01:21.399 ******* 2026-02-16 05:47:37.590099 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:47:37.590105 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:47:37.590112 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:47:37.590135 | orchestrator | 2026-02-16 05:47:37.590142 | orchestrator | TASK [ceph-facts : Set_fact _interface] 
**************************************** 2026-02-16 05:47:37.590149 | orchestrator | Monday 16 February 2026 05:47:22 +0000 (0:00:00.410) 0:01:21.809 ******* 2026-02-16 05:47:37.590156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-16 05:47:37.590190 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-16 05:47:37.590198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-16 05:47:37.590204 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:37.590211 | orchestrator | 2026-02-16 05:47:37.590217 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-16 05:47:37.590224 | orchestrator | Monday 16 February 2026 05:47:23 +0000 (0:00:00.362) 0:01:22.171 ******* 2026-02-16 05:47:37.590231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-16 05:47:37.590237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-16 05:47:37.590244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-16 05:47:37.590250 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:37.590257 | orchestrator | 2026-02-16 05:47:37.590263 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-16 05:47:37.590270 | orchestrator | Monday 16 February 2026 05:47:23 +0000 (0:00:00.648) 0:01:22.820 ******* 2026-02-16 05:47:37.590276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-16 05:47:37.590283 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-16 05:47:37.590289 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-16 05:47:37.590296 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:37.590302 | orchestrator | 2026-02-16 05:47:37.590309 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-16 
05:47:37.590315 | orchestrator | Monday 16 February 2026 05:47:24 +0000 (0:00:00.658) 0:01:23.479 ******* 2026-02-16 05:47:37.590322 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:47:37.590328 | orchestrator | ok: [testbed-node-4] 2026-02-16 05:47:37.590335 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:47:37.590341 | orchestrator | 2026-02-16 05:47:37.590348 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-16 05:47:37.590355 | orchestrator | Monday 16 February 2026 05:47:25 +0000 (0:00:00.662) 0:01:24.141 ******* 2026-02-16 05:47:37.590361 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-16 05:47:37.590368 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-16 05:47:37.590374 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-16 05:47:37.590381 | orchestrator | 2026-02-16 05:47:37.590387 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-16 05:47:37.590394 | orchestrator | Monday 16 February 2026 05:47:25 +0000 (0:00:00.526) 0:01:24.668 ******* 2026-02-16 05:47:37.590400 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 05:47:37.590407 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-16 05:47:37.590452 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-16 05:47:37.590459 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-16 05:47:37.590466 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-16 05:47:37.590473 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-16 05:47:37.590479 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-16 05:47:37.590486 | orchestrator | 2026-02-16 
05:47:37.590492 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-16 05:47:37.590499 | orchestrator | Monday 16 February 2026 05:47:26 +0000 (0:00:00.838) 0:01:25.506 ******* 2026-02-16 05:47:37.590505 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 05:47:37.590512 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-16 05:47:37.590518 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-16 05:47:37.590524 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-16 05:47:37.590536 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-16 05:47:37.590542 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-16 05:47:37.590549 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-16 05:47:37.590555 | orchestrator | 2026-02-16 05:47:37.590562 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] ************************** 2026-02-16 05:47:37.590568 | orchestrator | Monday 16 February 2026 05:47:28 +0000 (0:00:02.275) 0:01:27.781 ******* 2026-02-16 05:47:37.590575 | orchestrator | changed: [testbed-node-3] 2026-02-16 05:47:37.590581 | orchestrator | changed: [testbed-manager] 2026-02-16 05:47:37.590588 | orchestrator | changed: [testbed-node-5] 2026-02-16 05:47:37.590594 | orchestrator | changed: [testbed-node-4] 2026-02-16 05:47:37.590601 | orchestrator | changed: [testbed-node-1] 2026-02-16 05:47:37.590607 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:47:37.590614 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:47:37.590620 | orchestrator | 2026-02-16 05:47:37.590627 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] 
*********************** 2026-02-16 05:47:37.590633 | orchestrator | Monday 16 February 2026 05:47:36 +0000 (0:00:07.436) 0:01:35.218 ******* 2026-02-16 05:47:37.590640 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:37.590646 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:37.590653 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:37.590659 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:37.590665 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:37.590672 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:37.590678 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:37.590685 | orchestrator | 2026-02-16 05:47:37.590691 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ******************************** 2026-02-16 05:47:37.590698 | orchestrator | Monday 16 February 2026 05:47:37 +0000 (0:00:00.950) 0:01:36.168 ******* 2026-02-16 05:47:37.590705 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:37.590711 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:37.590723 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:55.103370 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:55.103483 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:55.103498 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:55.103509 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:55.103521 | orchestrator | 2026-02-16 05:47:55.103534 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ******************************** 2026-02-16 05:47:55.103546 | orchestrator | Monday 16 February 2026 05:47:37 +0000 (0:00:00.726) 0:01:36.895 ******* 2026-02-16 05:47:55.103557 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:55.103568 | orchestrator | changed: [testbed-node-2] 2026-02-16 05:47:55.103579 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:47:55.103590 | orchestrator | changed: [testbed-node-1] 
2026-02-16 05:47:55.103600 | orchestrator | changed: [testbed-node-3] 2026-02-16 05:47:55.103612 | orchestrator | changed: [testbed-node-4] 2026-02-16 05:47:55.103633 | orchestrator | changed: [testbed-node-5] 2026-02-16 05:47:55.103651 | orchestrator | 2026-02-16 05:47:55.103670 | orchestrator | TASK [ceph-validate : Include check_system.yml] ******************************** 2026-02-16 05:47:55.103687 | orchestrator | Monday 16 February 2026 05:47:40 +0000 (0:00:02.284) 0:01:39.180 ******* 2026-02-16 05:47:55.103707 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-16 05:47:55.103727 | orchestrator | 2026-02-16 05:47:55.103744 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] *************** 2026-02-16 05:47:55.103762 | orchestrator | Monday 16 February 2026 05:47:42 +0000 (0:00:02.077) 0:01:41.258 ******* 2026-02-16 05:47:55.103781 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:55.103799 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:55.103855 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:55.103877 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:55.103896 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:55.103916 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:55.103937 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:55.103959 | orchestrator | 2026-02-16 05:47:55.103982 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ****************************** 2026-02-16 05:47:55.104003 | orchestrator | Monday 16 February 2026 05:47:43 +0000 (0:00:00.756) 0:01:42.014 ******* 2026-02-16 05:47:55.104026 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:55.104047 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:55.104066 | orchestrator | skipping: 
[testbed-node-2] 2026-02-16 05:47:55.104080 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:55.104092 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:55.104104 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:55.104146 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:55.104160 | orchestrator | 2026-02-16 05:47:55.104186 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************ 2026-02-16 05:47:55.104197 | orchestrator | Monday 16 February 2026 05:47:43 +0000 (0:00:00.964) 0:01:42.979 ******* 2026-02-16 05:47:55.104286 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:55.104299 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:55.104310 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:55.104321 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:55.104336 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:55.104355 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:55.104381 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:55.104407 | orchestrator | 2026-02-16 05:47:55.104425 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************ 2026-02-16 05:47:55.104445 | orchestrator | Monday 16 February 2026 05:47:44 +0000 (0:00:00.829) 0:01:43.809 ******* 2026-02-16 05:47:55.104463 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:55.104482 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:55.104499 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:55.104517 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:55.104586 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:55.104623 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:55.104654 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:55.104667 | orchestrator | 2026-02-16 05:47:55.104678 | orchestrator | TASK [ceph-validate : Fail on unsupported 
CentOS release] ********************** 2026-02-16 05:47:55.104689 | orchestrator | Monday 16 February 2026 05:47:45 +0000 (0:00:01.081) 0:01:44.890 ******* 2026-02-16 05:47:55.104700 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:55.104710 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:55.104721 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:55.104731 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:55.104742 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:55.104753 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:55.104763 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:55.104774 | orchestrator | 2026-02-16 05:47:55.104785 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] *** 2026-02-16 05:47:55.104796 | orchestrator | Monday 16 February 2026 05:47:46 +0000 (0:00:00.772) 0:01:45.663 ******* 2026-02-16 05:47:55.104807 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:55.104817 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:55.104828 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:55.104838 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:55.104849 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:55.104859 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:55.104870 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:55.104881 | orchestrator | 2026-02-16 05:47:55.104892 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] *** 2026-02-16 05:47:55.104918 | orchestrator | Monday 16 February 2026 05:47:47 +0000 (0:00:00.945) 0:01:46.608 ******* 2026-02-16 05:47:55.104929 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:55.104940 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:55.104950 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:55.104960 | orchestrator | 
skipping: [testbed-node-3] 2026-02-16 05:47:55.104971 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:55.104981 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:55.104992 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:55.105002 | orchestrator | 2026-02-16 05:47:55.105013 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-02-16 05:47:55.105045 | orchestrator | Monday 16 February 2026 05:47:48 +0000 (0:00:00.832) 0:01:47.440 ******* 2026-02-16 05:47:55.105057 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:55.105067 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:55.105078 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:55.105088 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:55.105099 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:55.105109 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:55.105179 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:55.105191 | orchestrator | 2026-02-16 05:47:55.105201 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-02-16 05:47:55.105212 | orchestrator | Monday 16 February 2026 05:47:49 +0000 (0:00:00.974) 0:01:48.415 ******* 2026-02-16 05:47:55.105223 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:55.105233 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:55.105244 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:55.105254 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:55.105265 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:55.105275 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:55.105286 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:55.105296 | orchestrator | 2026-02-16 05:47:55.105307 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-02-16 
05:47:55.105318 | orchestrator | Monday 16 February 2026 05:47:50 +0000 (0:00:00.961) 0:01:49.377 ******* 2026-02-16 05:47:55.105328 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:55.105342 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:55.105363 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:55.105383 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:55.105403 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:55.105423 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:55.105444 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:55.105465 | orchestrator | 2026-02-16 05:47:55.105487 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-02-16 05:47:55.105507 | orchestrator | Monday 16 February 2026 05:47:51 +0000 (0:00:00.762) 0:01:50.139 ******* 2026-02-16 05:47:55.105526 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:55.105537 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:55.105548 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:55.105558 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:55.105571 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:55.105590 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:55.105608 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:55.105626 | orchestrator | 2026-02-16 05:47:55.105644 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-02-16 05:47:55.105662 | orchestrator | Monday 16 February 2026 05:47:52 +0000 (0:00:00.961) 0:01:51.101 ******* 2026-02-16 05:47:55.105680 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:55.105698 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:55.105728 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:55.105746 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:55.105766 | orchestrator 
| skipping: [testbed-node-4] 2026-02-16 05:47:55.105798 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:55.105818 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:55.105836 | orchestrator | 2026-02-16 05:47:55.105855 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-02-16 05:47:55.105870 | orchestrator | Monday 16 February 2026 05:47:52 +0000 (0:00:00.717) 0:01:51.819 ******* 2026-02-16 05:47:55.105881 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:55.105892 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:55.105902 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:55.105914 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 05:47:55.105926 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 05:47:55.105937 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:55.105948 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})  2026-02-16 05:47:55.105959 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})  2026-02-16 05:47:55.105969 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:55.105980 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 05:47:55.105991 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  
2026-02-16 05:47:55.106002 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:55.106013 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:55.106086 | orchestrator | 2026-02-16 05:47:55.106098 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-02-16 05:47:55.106109 | orchestrator | Monday 16 February 2026 05:47:53 +0000 (0:00:00.988) 0:01:52.807 ******* 2026-02-16 05:47:55.106172 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:55.106184 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:47:55.106195 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:47:55.106206 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:47:55.106216 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:47:55.106227 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:47:55.106238 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:47:55.106248 | orchestrator | 2026-02-16 05:47:55.106259 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-02-16 05:47:55.106270 | orchestrator | Monday 16 February 2026 05:47:54 +0000 (0:00:00.744) 0:01:53.552 ******* 2026-02-16 05:47:55.106281 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:47:55.106306 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:48:04.731814 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:48:04.731913 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:04.731925 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:04.731932 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:04.731939 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:48:04.731945 | orchestrator | 2026-02-16 05:48:04.731953 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ****** 2026-02-16 05:48:04.731961 | orchestrator | Monday 16 February 2026 05:47:55 +0000 (0:00:01.047) 0:01:54.599 ******* 
2026-02-16 05:48:04.731967 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:48:04.731974 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:48:04.731981 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:48:04.731987 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:04.731994 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:04.732000 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:04.732029 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:48:04.732036 | orchestrator | 2026-02-16 05:48:04.732044 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-02-16 05:48:04.732051 | orchestrator | Monday 16 February 2026 05:47:56 +0000 (0:00:00.764) 0:01:55.364 ******* 2026-02-16 05:48:04.732057 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:48:04.732064 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:48:04.732070 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:48:04.732076 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:04.732083 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:04.732089 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:04.732096 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:48:04.732102 | orchestrator | 2026-02-16 05:48:04.732110 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ******************************** 2026-02-16 05:48:04.732161 | orchestrator | Monday 16 February 2026 05:47:57 +0000 (0:00:01.028) 0:01:56.393 ******* 2026-02-16 05:48:04.732167 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:48:04.732173 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:48:04.732180 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:48:04.732185 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:04.732191 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:04.732197 | orchestrator | skipping: [testbed-node-5] 
2026-02-16 05:48:04.732204 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:48:04.732215 | orchestrator | 2026-02-16 05:48:04.732219 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-02-16 05:48:04.732223 | orchestrator | Monday 16 February 2026 05:47:58 +0000 (0:00:00.958) 0:01:57.351 ******* 2026-02-16 05:48:04.732227 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:48:04.732231 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:48:04.732234 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:48:04.732238 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:04.732242 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:04.732256 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:04.732261 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:48:04.732264 | orchestrator | 2026-02-16 05:48:04.732268 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-02-16 05:48:04.732272 | orchestrator | Monday 16 February 2026 05:47:59 +0000 (0:00:00.724) 0:01:58.075 ******* 2026-02-16 05:48:04.732276 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:48:04.732279 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:48:04.732283 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:48:04.732287 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:48:04.732292 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 05:48:04.732296 | orchestrator | 2026-02-16 05:48:04.732299 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-02-16 05:48:04.732304 | orchestrator | Monday 16 February 2026 05:48:00 +0000 (0:00:01.555) 0:01:59.631 ******* 2026-02-16 05:48:04.732310 | orchestrator | ok: [testbed-node-3] 2026-02-16 05:48:04.732318 | orchestrator | ok: 
[testbed-node-4] 2026-02-16 05:48:04.732327 | orchestrator | ok: [testbed-node-5] 2026-02-16 05:48:04.732334 | orchestrator | 2026-02-16 05:48:04.732340 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-02-16 05:48:04.732346 | orchestrator | Monday 16 February 2026 05:48:01 +0000 (0:00:00.391) 0:02:00.022 ******* 2026-02-16 05:48:04.732354 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 05:48:04.732361 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 05:48:04.732367 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:04.732373 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})  2026-02-16 05:48:04.732387 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})  2026-02-16 05:48:04.732393 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:04.732399 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 05:48:04.732405 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 05:48:04.732411 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:04.732417 | orchestrator | 2026-02-16 05:48:04.732423 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-02-16 05:48:04.732429 | orchestrator | Monday 16 February 2026 
05:48:01 +0000 (0:00:00.391) 0:02:00.414 ******* 2026-02-16 05:48:04.732455 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:04.732465 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:04.732472 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:04.732479 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:04.732485 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:04.732492 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:04.732498 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}, 
'ansible_loop_var': 'item'})  2026-02-16 05:48:04.732510 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:04.732517 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:04.732523 | orchestrator | 2026-02-16 05:48:04.732530 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-02-16 05:48:04.732536 | orchestrator | Monday 16 February 2026 05:48:02 +0000 (0:00:00.591) 0:02:01.006 ******* 2026-02-16 05:48:04.732543 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:04.732549 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:04.732556 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:04.732562 | orchestrator | 2026-02-16 05:48:04.732569 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-02-16 05:48:04.732575 | orchestrator | Monday 16 February 2026 05:48:02 +0000 (0:00:00.345) 0:02:01.351 ******* 2026-02-16 05:48:04.732587 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:04.732593 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:04.732600 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:04.732606 | orchestrator | 2026-02-16 05:48:04.732612 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-02-16 05:48:04.732619 | orchestrator | Monday 16 February 2026 05:48:02 +0000 (0:00:00.315) 0:02:01.666 ******* 2026-02-16 05:48:04.732625 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:04.732631 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:04.732638 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:04.732644 | 
orchestrator | 2026-02-16 05:48:04.732651 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-02-16 05:48:04.732657 | orchestrator | Monday 16 February 2026 05:48:02 +0000 (0:00:00.299) 0:02:01.966 ******* 2026-02-16 05:48:04.732663 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:04.732669 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:04.732676 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:04.732683 | orchestrator | 2026-02-16 05:48:04.732689 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-02-16 05:48:04.732696 | orchestrator | Monday 16 February 2026 05:48:03 +0000 (0:00:00.291) 0:02:02.257 ******* 2026-02-16 05:48:04.732703 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'}) 2026-02-16 05:48:04.732711 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'}) 2026-02-16 05:48:04.732717 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'}) 2026-02-16 05:48:04.732723 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}) 2026-02-16 05:48:04.732736 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'}) 2026-02-16 05:48:05.121418 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'}) 2026-02-16 05:48:05.121523 | orchestrator | 2026-02-16 05:48:05.121555 | orchestrator | TASK 
[ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-02-16 05:48:05.121565 | orchestrator | Monday 16 February 2026 05:48:04 +0000 (0:00:01.444) 0:02:03.701 ******* 2026-02-16 05:48:05.121578 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e/osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1771213392.735571, 'mtime': 1771213392.7295709, 'ctime': 1771213392.7295709, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e/osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:05.121625 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74/osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 
1771213411.2998898, 'mtime': 1771213411.2948897, 'ctime': 1771213411.2948897, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74/osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:05.121635 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:05.121659 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d/osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 950, 'dev': 6, 'nlink': 1, 'atime': 1771213393.155403, 'mtime': 1771213393.1464028, 'ctime': 1771213393.1464028, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d/osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:05.121668 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca/osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 960, 'dev': 6, 'nlink': 1, 'atime': 1771213411.9697268, 'mtime': 1771213411.9627266, 'ctime': 1771213411.9627266, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca/osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:05.121682 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:05.121694 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': 
'/dev/ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5/osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1771213393.2782385, 'mtime': 1771213393.2732384, 'ctime': 1771213393.2732384, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5/osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:05.121709 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-f418f421-cc32-53ce-b421-39353fe37c02/osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1771213412.6825683, 'mtime': 1771213412.6795683, 'ctime': 1771213412.6795683, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': 
False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-f418f421-cc32-53ce-b421-39353fe37c02/osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:09.333716 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:09.333836 | orchestrator | 2026-02-16 05:48:09.333854 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-02-16 05:48:09.333867 | orchestrator | Monday 16 February 2026 05:48:05 +0000 (0:00:00.389) 0:02:04.091 ******* 2026-02-16 05:48:09.333879 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 05:48:09.333891 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 05:48:09.333902 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:09.333911 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})  2026-02-16 05:48:09.333921 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})  2026-02-16 05:48:09.333930 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:09.333940 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 
'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 05:48:09.333985 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 05:48:09.333997 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:09.334006 | orchestrator | 2026-02-16 05:48:09.334045 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-02-16 05:48:09.334059 | orchestrator | Monday 16 February 2026 05:48:05 +0000 (0:00:00.371) 0:02:04.463 ******* 2026-02-16 05:48:09.334084 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:09.334097 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:09.334107 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:09.334151 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:09.334168 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': 
{'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:09.334183 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:09.334199 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:09.334215 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:09.334232 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:09.334248 | orchestrator | 2026-02-16 05:48:09.334264 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-02-16 05:48:09.334279 | orchestrator | Monday 16 February 2026 05:48:05 +0000 (0:00:00.348) 0:02:04.811 ******* 2026-02-16 05:48:09.334297 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 05:48:09.334319 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 05:48:09.334342 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:09.334386 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})  2026-02-16 05:48:09.334408 | orchestrator | 
skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})  2026-02-16 05:48:09.334456 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:09.334475 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 05:48:09.334490 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 05:48:09.334504 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:09.334520 | orchestrator | 2026-02-16 05:48:09.334624 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-02-16 05:48:09.334641 | orchestrator | Monday 16 February 2026 05:48:06 +0000 (0:00:00.626) 0:02:05.438 ******* 2026-02-16 05:48:09.334657 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:09.334673 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:09.334688 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:09.334715 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 
'item': {'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:09.334733 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:09.334748 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:09.334764 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:09.334779 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:09.334795 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:09.334811 | orchestrator | 2026-02-16 05:48:09.334827 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-02-16 05:48:09.334842 | orchestrator | Monday 16 February 2026 05:48:06 +0000 (0:00:00.397) 0:02:05.835 ******* 2026-02-16 05:48:09.334857 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:48:09.334872 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:48:09.334887 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:48:09.334901 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:09.334916 | orchestrator | skipping: 
[testbed-node-4] 2026-02-16 05:48:09.334930 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:09.334945 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:48:09.334960 | orchestrator | 2026-02-16 05:48:09.334975 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-02-16 05:48:09.334990 | orchestrator | Monday 16 February 2026 05:48:07 +0000 (0:00:00.764) 0:02:06.600 ******* 2026-02-16 05:48:09.335016 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:48:09.335031 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:48:09.335045 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:48:09.335060 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:48:09.335075 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 05:48:09.335091 | orchestrator | 2026-02-16 05:48:09.335106 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-02-16 05:48:09.335173 | orchestrator | Monday 16 February 2026 05:48:09 +0000 (0:00:01.598) 0:02:08.198 ******* 2026-02-16 05:48:09.335207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571477 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:13.571488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571515 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571533 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:13.571542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571601 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:13.571610 | orchestrator 
| 2026-02-16 05:48:13.571620 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-02-16 05:48:13.571630 | orchestrator | Monday 16 February 2026 05:48:09 +0000 (0:00:00.383) 0:02:08.582 ******* 2026-02-16 05:48:13.571639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571745 | orchestrator | 
skipping: [testbed-node-3] 2026-02-16 05:48:13.571753 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:13.571762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571821 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:13.571830 | orchestrator | 2026-02-16 05:48:13.571839 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-02-16 05:48:13.571847 | orchestrator | Monday 16 February 2026 05:48:10 +0000 (0:00:00.660) 0:02:09.242 ******* 2026-02-16 05:48:13.571856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  
2026-02-16 05:48:13.571890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571954 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:13.571963 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:13.571972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571980 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.571998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-16 05:48:13.572006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-16 05:48:13.572015 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:13.572023 | orchestrator | 2026-02-16 05:48:13.572032 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-02-16 05:48:13.572041 | orchestrator | Monday 16 February 2026 05:48:10 +0000 (0:00:00.444) 0:02:09.686 ******* 2026-02-16 05:48:13.572049 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:48:13.572058 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:48:13.572067 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:48:13.572075 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:13.572084 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:13.572092 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:13.572101 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:48:13.572144 | orchestrator | 2026-02-16 05:48:13.572155 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-02-16 05:48:13.572164 | orchestrator | Monday 16 February 2026 05:48:11 +0000 (0:00:00.773) 0:02:10.459 ******* 2026-02-16 05:48:13.572173 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:48:13.572181 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:48:13.572190 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:48:13.572198 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:13.572207 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:13.572215 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:13.572224 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:48:13.572232 | orchestrator | 2026-02-16 05:48:13.572241 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-02-16 05:48:13.572250 | orchestrator | Monday 16 February 2026 05:48:12 +0000 (0:00:00.956) 0:02:11.416 ******* 2026-02-16 05:48:13.572258 | orchestrator | skipping: 
[testbed-node-0] 2026-02-16 05:48:13.572267 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:48:13.572275 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:48:13.572284 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:13.572292 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:13.572301 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:13.572309 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:48:13.572318 | orchestrator | 2026-02-16 05:48:13.572327 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] *** 2026-02-16 05:48:13.572335 | orchestrator | Monday 16 February 2026 05:48:13 +0000 (0:00:00.932) 0:02:12.349 ******* 2026-02-16 05:48:13.572350 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:48:17.977766 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:48:17.977901 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:48:17.977925 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:17.977942 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:17.977958 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:17.977974 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:48:17.977991 | orchestrator | 2026-02-16 05:48:17.978011 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-02-16 05:48:17.978161 | orchestrator | Monday 16 February 2026 05:48:14 +0000 (0:00:00.744) 0:02:13.094 ******* 2026-02-16 05:48:17.978175 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:48:17.978185 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:48:17.978195 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:48:17.978204 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:17.978214 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:17.978223 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:17.978233 | 
orchestrator | skipping: [testbed-manager] 2026-02-16 05:48:17.978242 | orchestrator | 2026-02-16 05:48:17.978252 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-02-16 05:48:17.978261 | orchestrator | Monday 16 February 2026 05:48:15 +0000 (0:00:01.071) 0:02:14.166 ******* 2026-02-16 05:48:17.978271 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:48:17.978282 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:48:17.978293 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:48:17.978304 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:17.978314 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:17.978325 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:17.978336 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:48:17.978347 | orchestrator | 2026-02-16 05:48:17.978359 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-02-16 05:48:17.978371 | orchestrator | Monday 16 February 2026 05:48:15 +0000 (0:00:00.762) 0:02:14.929 ******* 2026-02-16 05:48:17.978381 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:48:17.978393 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:48:17.978403 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:48:17.978414 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:17.978425 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:17.978436 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:17.978446 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:48:17.978457 | orchestrator | 2026-02-16 05:48:17.978468 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-02-16 05:48:17.978493 | orchestrator | Monday 16 February 2026 05:48:16 +0000 (0:00:01.012) 0:02:15.941 ******* 2026-02-16 05:48:17.978506 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-16 05:48:17.978518 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-16 05:48:17.978537 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-16 05:48:17.978555 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-16 05:48:17.978572 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-16 05:48:17.978590 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-16 05:48:17.978606 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:48:17.978622 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-16 05:48:17.978639 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-16 05:48:17.978668 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-16 05:48:17.978685 | orchestrator | skipping: 
[testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-16 05:48:17.978702 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-16 05:48:17.978719 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-16 05:48:17.978760 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-16 05:48:17.978776 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-16 05:48:17.978800 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-16 05:48:17.978817 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-16 05:48:17.978833 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-16 05:48:17.978849 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-16 05:48:17.978864 | orchestrator | skipping: [testbed-node-1] 
2026-02-16 05:48:17.978879 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:48:17.978895 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-16 05:48:17.978910 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-16 05:48:17.978926 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-16 05:48:17.978948 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-16 05:48:17.978958 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-16 05:48:17.978967 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-16 05:48:17.978976 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-16 05:48:17.978986 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-16 05:48:17.978995 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 
'mode': '0600', 'name': 'client.manila'})  2026-02-16 05:48:17.979014 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-16 05:48:17.979024 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:17.979036 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-16 05:48:17.979052 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-16 05:48:17.979068 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-16 05:48:17.979084 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-16 05:48:17.979100 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-16 05:48:17.979195 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-16 05:48:17.979228 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-16 05:48:19.920349 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd 
pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-16 05:48:19.920480 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:19.920502 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-16 05:48:19.920521 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-16 05:48:19.920539 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-16 05:48:19.920555 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-16 05:48:19.920572 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:48:19.920589 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-16 05:48:19.920605 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-16 05:48:19.920622 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:19.920637 | orchestrator | 2026-02-16 05:48:19.920675 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************ 2026-02-16 05:48:19.920693 | orchestrator | Monday 16 February 2026 05:48:17 +0000 (0:00:01.005) 0:02:16.947 ******* 2026-02-16 05:48:19.920710 | orchestrator | skipping: [testbed-node-0] 2026-02-16 
05:48:19.920726 | orchestrator | skipping: [testbed-node-1] 2026-02-16 05:48:19.920742 | orchestrator | skipping: [testbed-node-2] 2026-02-16 05:48:19.920758 | orchestrator | skipping: [testbed-node-3] 2026-02-16 05:48:19.920800 | orchestrator | skipping: [testbed-node-4] 2026-02-16 05:48:19.920817 | orchestrator | skipping: [testbed-node-5] 2026-02-16 05:48:19.920832 | orchestrator | skipping: [testbed-manager] 2026-02-16 05:48:19.920848 | orchestrator | 2026-02-16 05:48:19.920865 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] **************************** 2026-02-16 05:48:19.920884 | orchestrator | Monday 16 February 2026 05:48:18 +0000 (0:00:00.994) 0:02:17.942 ******* 2026-02-16 05:48:19.920902 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-16 05:48:19.920921 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-16 05:48:19.920939 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-16 05:48:19.920958 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-16 05:48:19.920975 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-16 05:48:19.920992 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 
'client.manila'})
2026-02-16 05:48:19.921011 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:19.921028 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 05:48:19.921045 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 05:48:19.921062 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 05:48:19.921080 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 05:48:19.921152 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 05:48:19.921172 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 05:48:19.921189 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:48:19.921206 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 05:48:19.921223 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 05:48:19.921238 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 05:48:19.921253 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 05:48:19.921269 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 05:48:19.921301 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 05:48:19.921318 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:48:19.921344 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 05:48:19.921361 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 05:48:19.921377 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 05:48:19.921395 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 05:48:19.921411 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 05:48:19.921428 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 05:48:19.921442 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 05:48:19.921456 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 05:48:19.921470 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 05:48:19.921485 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 05:48:19.921501 | orchestrator | skipping: [testbed-node-3]
2026-02-16 05:48:19.921517 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 05:48:19.921533 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 05:48:19.921549 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 05:48:19.921566 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 05:48:19.921597 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 05:48:46.078273 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 05:48:46.078400 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 05:48:46.078447 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 05:48:46.078460 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 05:48:46.078473 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 05:48:46.078485 | orchestrator | skipping: [testbed-manager]
2026-02-16 05:48:46.078496 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 05:48:46.078507 | orchestrator | skipping: [testbed-node-4]
2026-02-16 05:48:46.078518 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 05:48:46.078543 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 05:48:46.078555 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 05:48:46.078565 | orchestrator | skipping: [testbed-node-5]
2026-02-16 05:48:46.078577 | orchestrator |
2026-02-16 05:48:46.078589 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-02-16 05:48:46.078601 | orchestrator | Monday 16 February 2026 05:48:19 +0000 (0:00:00.950) 0:02:18.892 *******
2026-02-16 05:48:46.078612 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:46.078623 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:48:46.078633 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:48:46.078644 | orchestrator | skipping: [testbed-node-3]
2026-02-16 05:48:46.078654 | orchestrator | skipping: [testbed-node-4]
2026-02-16 05:48:46.078665 | orchestrator | skipping: [testbed-node-5]
2026-02-16 05:48:46.078675 | orchestrator | skipping: [testbed-manager]
2026-02-16 05:48:46.078686 | orchestrator |
2026-02-16 05:48:46.078697 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-02-16 05:48:46.078708 | orchestrator | Monday 16 February 2026 05:48:20 +0000 (0:00:01.022) 0:02:19.914 *******
2026-02-16 05:48:46.078719 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:46.078729 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:48:46.078740 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:48:46.078750 | orchestrator | skipping: [testbed-node-3]
2026-02-16 05:48:46.078761 | orchestrator | skipping: [testbed-node-4]
2026-02-16 05:48:46.078771 | orchestrator | skipping: [testbed-node-5]
2026-02-16 05:48:46.078782 | orchestrator | skipping: [testbed-manager]
2026-02-16 05:48:46.078792 | orchestrator |
2026-02-16 05:48:46.078803 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-02-16 05:48:46.078814 | orchestrator | Monday 16 February 2026 05:48:21 +0000 (0:00:00.970) 0:02:20.885 *******
2026-02-16 05:48:46.078825 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:46.078836 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:48:46.078847 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:48:46.078857 | orchestrator | skipping: [testbed-node-3]
2026-02-16 05:48:46.078867 | orchestrator | skipping: [testbed-node-4]
2026-02-16 05:48:46.078878 | orchestrator | skipping: [testbed-node-5]
2026-02-16 05:48:46.078888 | orchestrator | skipping: [testbed-manager]
2026-02-16 05:48:46.078899 | orchestrator |
2026-02-16 05:48:46.078910 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-02-16 05:48:46.078930 | orchestrator | Monday 16 February 2026 05:48:23 +0000 (0:00:01.506) 0:02:22.392 *******
2026-02-16 05:48:46.078941 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-16 05:48:46.078954 | orchestrator |
2026-02-16 05:48:46.078964 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-02-16 05:48:46.078975 | orchestrator | Monday 16 February 2026 05:48:25 +0000 (0:00:01.956) 0:02:24.348 *******
2026-02-16 05:48:46.078986 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-16 05:48:46.078997 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-16 05:48:46.079007 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-16 05:48:46.079018 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-16 05:48:46.079028 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-16 05:48:46.079057 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-16 05:48:46.079068 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-16 05:48:46.079079 | orchestrator |
2026-02-16 05:48:46.079090 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-02-16 05:48:46.079125 | orchestrator | Monday 16 February 2026 05:48:26 +0000 (0:00:00.937) 0:02:25.286 *******
2026-02-16 05:48:46.079136 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:46.079147 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:48:46.079158 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:48:46.079168 | orchestrator | skipping: [testbed-node-3]
2026-02-16 05:48:46.079179 | orchestrator | skipping: [testbed-node-4]
2026-02-16 05:48:46.079190 | orchestrator | skipping: [testbed-node-5]
2026-02-16 05:48:46.079200 | orchestrator | skipping: [testbed-manager]
2026-02-16 05:48:46.079211 | orchestrator |
2026-02-16 05:48:46.079222 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-02-16 05:48:46.079233 | orchestrator | Monday 16 February 2026 05:48:27 +0000 (0:00:01.033) 0:02:26.320 *******
2026-02-16 05:48:46.079244 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:46.079255 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:48:46.079265 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:48:46.079276 | orchestrator | skipping: [testbed-node-3]
2026-02-16 05:48:46.079287 | orchestrator | skipping: [testbed-node-4]
2026-02-16 05:48:46.079297 | orchestrator | skipping: [testbed-node-5]
2026-02-16 05:48:46.079307 | orchestrator | skipping: [testbed-manager]
2026-02-16 05:48:46.079326 | orchestrator |
2026-02-16 05:48:46.079346 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-02-16 05:48:46.079365 | orchestrator | Monday 16 February 2026 05:48:28 +0000 (0:00:00.811) 0:02:27.131 *******
2026-02-16 05:48:46.079384 | orchestrator | ok: [testbed-node-1]
2026-02-16 05:48:46.079405 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:46.079424 | orchestrator | ok: [testbed-node-2]
2026-02-16 05:48:46.079444 | orchestrator | ok: [testbed-node-3]
2026-02-16 05:48:46.079473 | orchestrator | ok: [testbed-node-4]
2026-02-16 05:48:46.079494 | orchestrator | ok: [testbed-node-5]
2026-02-16 05:48:46.079507 | orchestrator | ok: [testbed-manager]
2026-02-16 05:48:46.079517 | orchestrator |
2026-02-16 05:48:46.079528 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-02-16 05:48:46.079538 | orchestrator | Monday 16 February 2026 05:48:29 +0000 (0:00:01.394) 0:02:28.526 *******
2026-02-16 05:48:46.079549 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:46.079560 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:48:46.079570 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:48:46.079581 | orchestrator | skipping: [testbed-node-3]
2026-02-16 05:48:46.079600 | orchestrator | skipping: [testbed-node-4]
2026-02-16 05:48:46.079610 | orchestrator | skipping: [testbed-node-5]
2026-02-16 05:48:46.079621 | orchestrator | skipping: [testbed-manager]
2026-02-16 05:48:46.079632 | orchestrator |
2026-02-16 05:48:46.079646 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-02-16 05:48:46.079657 | orchestrator | Monday 16 February 2026 05:48:31 +0000 (0:00:01.476) 0:02:30.003 *******
2026-02-16 05:48:46.079668 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:46.079678 | orchestrator | skipping: [testbed-node-1]
2026-02-16 05:48:46.079689 | orchestrator | skipping: [testbed-node-2]
2026-02-16 05:48:46.079699 | orchestrator | skipping: [testbed-node-3]
2026-02-16 05:48:46.079709 | orchestrator | skipping: [testbed-node-4]
2026-02-16 05:48:46.079720 | orchestrator | skipping: [testbed-node-5]
2026-02-16 05:48:46.079730 | orchestrator | skipping: [testbed-manager]
2026-02-16 05:48:46.079741 | orchestrator |
2026-02-16 05:48:46.079752 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-02-16 05:48:46.079762 | orchestrator | Monday 16 February 2026 05:48:32 +0000 (0:00:01.540) 0:02:31.543 *******
2026-02-16 05:48:46.079773 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:46.079784 | orchestrator |
2026-02-16 05:48:46.079794 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-02-16 05:48:46.079805 | orchestrator | Monday 16 February 2026 05:48:34 +0000 (0:00:01.805) 0:02:33.349 *******
2026-02-16 05:48:46.079816 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:46.079826 | orchestrator |
2026-02-16 05:48:46.079837 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-02-16 05:48:46.079847 | orchestrator |
2026-02-16 05:48:46.079858 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-16 05:48:46.079869 | orchestrator | Monday 16 February 2026 05:48:35 +0000 (0:00:00.781) 0:02:34.130 *******
2026-02-16 05:48:46.079879 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:46.079890 | orchestrator |
2026-02-16 05:48:46.079901 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-16 05:48:46.079911 | orchestrator | Monday 16 February 2026 05:48:35 +0000 (0:00:00.452) 0:02:34.582 *******
2026-02-16 05:48:46.079922 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:46.079933 | orchestrator |
2026-02-16 05:48:46.079943 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-02-16 05:48:46.079954 | orchestrator | Monday 16 February 2026 05:48:36 +0000 (0:00:00.498) 0:02:35.081 *******
2026-02-16 05:48:46.079967 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__914de6c9ac680119b10f48c8706e0b5d20c94ff3'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-16 05:48:46.079980 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__914de6c9ac680119b10f48c8706e0b5d20c94ff3'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-16 05:48:46.080001 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__914de6c9ac680119b10f48c8706e0b5d20c94ff3'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-16 05:48:54.116987 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__914de6c9ac680119b10f48c8706e0b5d20c94ff3'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-16 05:48:54.117154 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__914de6c9ac680119b10f48c8706e0b5d20c94ff3'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-16 05:48:54.117179 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__914de6c9ac680119b10f48c8706e0b5d20c94ff3'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__914de6c9ac680119b10f48c8706e0b5d20c94ff3'}])
2026-02-16 05:48:54.117189 | orchestrator |
2026-02-16 05:48:54.117198 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-02-16 05:48:54.117206 | orchestrator |
2026-02-16 05:48:54.117212 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-02-16 05:48:54.117219 | orchestrator | Monday 16 February 2026 05:48:46 +0000 (0:00:09.956) 0:02:45.037 *******
2026-02-16 05:48:54.117226 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:54.117233 | orchestrator |
2026-02-16 05:48:54.117240 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-02-16 05:48:54.117246 | orchestrator | Monday 16 February 2026 05:48:46 +0000 (0:00:00.478) 0:02:45.515 *******
2026-02-16 05:48:54.117253 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:54.117259 | orchestrator |
2026-02-16 05:48:54.117266 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-02-16 05:48:54.117272 | orchestrator | Monday 16 February 2026 05:48:46 +0000 (0:00:00.142) 0:02:45.658 *******
2026-02-16 05:48:54.117279 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:54.117286 | orchestrator |
2026-02-16 05:48:54.117293 | orchestrator | TASK [Select a running monitor] ************************************************
2026-02-16 05:48:54.117300 | orchestrator | Monday 16 February 2026 05:48:46 +0000 (0:00:00.144) 0:02:45.802 *******
2026-02-16 05:48:54.117306 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:54.117313 | orchestrator |
2026-02-16 05:48:54.117320 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-16 05:48:54.117327 | orchestrator | Monday 16 February 2026 05:48:46 +0000 (0:00:00.145) 0:02:45.948 *******
2026-02-16 05:48:54.117333 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-16 05:48:54.117340 | orchestrator |
2026-02-16 05:48:54.117346 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-16 05:48:54.117353 | orchestrator | Monday 16 February 2026 05:48:47 +0000 (0:00:00.241) 0:02:46.189 *******
2026-02-16 05:48:54.117359 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:54.117366 | orchestrator |
2026-02-16 05:48:54.117372 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-16 05:48:54.117379 | orchestrator | Monday 16 February 2026 05:48:47 +0000 (0:00:00.473) 0:02:46.663 *******
2026-02-16 05:48:54.117386 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:54.117392 | orchestrator |
2026-02-16 05:48:54.117399 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-16 05:48:54.117405 | orchestrator | Monday 16 February 2026 05:48:47 +0000 (0:00:00.129) 0:02:46.792 *******
2026-02-16 05:48:54.117412 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:54.117418 | orchestrator |
2026-02-16 05:48:54.117425 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-16 05:48:54.117432 | orchestrator | Monday 16 February 2026 05:48:48 +0000 (0:00:00.695) 0:02:47.488 *******
2026-02-16 05:48:54.117438 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:54.117445 | orchestrator |
2026-02-16 05:48:54.117451 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-16 05:48:54.117464 | orchestrator | Monday 16 February 2026 05:48:48 +0000 (0:00:00.140) 0:02:47.629 *******
2026-02-16 05:48:54.117470 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:54.117477 | orchestrator |
2026-02-16 05:48:54.117483 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-16 05:48:54.117490 | orchestrator | Monday 16 February 2026 05:48:48 +0000 (0:00:00.164) 0:02:47.794 *******
2026-02-16 05:48:54.117497 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:54.117503 | orchestrator |
2026-02-16 05:48:54.117510 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-16 05:48:54.117517 | orchestrator | Monday 16 February 2026 05:48:48 +0000 (0:00:00.149) 0:02:47.943 *******
2026-02-16 05:48:54.117523 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:54.117530 | orchestrator |
2026-02-16 05:48:54.117538 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-16 05:48:54.117546 | orchestrator | Monday 16 February 2026 05:48:49 +0000 (0:00:00.167) 0:02:48.111 *******
2026-02-16 05:48:54.117553 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:54.117560 | orchestrator |
2026-02-16 05:48:54.117568 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-16 05:48:54.117575 | orchestrator | Monday 16 February 2026 05:48:49 +0000 (0:00:00.136) 0:02:48.248 *******
2026-02-16 05:48:54.117583 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-16 05:48:54.117603 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-16 05:48:54.117611 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-16 05:48:54.117619 | orchestrator |
2026-02-16 05:48:54.117627 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-16 05:48:54.117634 | orchestrator | Monday 16 February 2026 05:48:49 +0000 (0:00:00.637) 0:02:48.885 *******
2026-02-16 05:48:54.117642 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:54.117650 | orchestrator |
2026-02-16 05:48:54.117658 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-16 05:48:54.117665 | orchestrator | Monday 16 February 2026 05:48:50 +0000 (0:00:00.254) 0:02:49.140 *******
2026-02-16 05:48:54.117673 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-16 05:48:54.117681 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-16 05:48:54.117688 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-16 05:48:54.117696 | orchestrator |
2026-02-16 05:48:54.117704 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-16 05:48:54.117711 | orchestrator | Monday 16 February 2026 05:48:52 +0000 (0:00:02.313) 0:02:51.453 *******
2026-02-16 05:48:54.117723 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-16 05:48:54.117731 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-16 05:48:54.117739 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-16 05:48:54.117746 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:54.117754 | orchestrator |
2026-02-16 05:48:54.117763 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-16 05:48:54.117770 | orchestrator | Monday 16 February 2026 05:48:52 +0000 (0:00:00.432) 0:02:51.886 *******
2026-02-16 05:48:54.117779 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-16 05:48:54.117789 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-16 05:48:54.117797 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-16 05:48:54.117811 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:54.117819 | orchestrator |
2026-02-16 05:48:54.117826 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-16 05:48:54.117835 | orchestrator | Monday 16 February 2026 05:48:53 +0000 (0:00:00.853) 0:02:52.740 *******
2026-02-16 05:48:54.117848 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-16 05:48:54.117862 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-16 05:48:54.117873 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-16 05:48:54.117884 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:54.117896 | orchestrator |
2026-02-16 05:48:54.117907 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-16 05:48:54.117918 | orchestrator | Monday 16 February 2026 05:48:53 +0000 (0:00:00.162) 0:02:52.903 *******
2026-02-16 05:48:54.117938 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c4764146f42e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-16 05:48:50.776018', 'end': '2026-02-16 05:48:50.837412', 'delta': '0:00:00.061394', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c4764146f42e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-16 05:48:58.368512 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '8a5d26661ef8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-16 05:48:51.406956', 'end': '2026-02-16 05:48:51.469053', 'delta': '0:00:00.062097', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8a5d26661ef8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-16 05:48:58.368610 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '6720fcec1b21', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-16 05:48:52.274344', 'end': '2026-02-16 05:48:52.317355', 'delta': '0:00:00.043011', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6720fcec1b21'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-16 05:48:58.368645 | orchestrator |
2026-02-16 05:48:58.368659 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-16 05:48:58.368670 | orchestrator | Monday 16 February 2026 05:48:54 +0000 (0:00:00.188) 0:02:53.091 *******
2026-02-16 05:48:58.368680 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:58.368691 | orchestrator |
2026-02-16 05:48:58.368702 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-16 05:48:58.368712 | orchestrator | Monday 16 February 2026 05:48:54 +0000 (0:00:00.257) 0:02:53.349 *******
2026-02-16 05:48:58.368722 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:58.368732 | orchestrator |
2026-02-16 05:48:58.368742 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-16 05:48:58.368752 | orchestrator | Monday 16 February 2026 05:48:55 +0000 (0:00:00.819) 0:02:54.169 *******
2026-02-16 05:48:58.368761 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:58.368771 | orchestrator |
2026-02-16 05:48:58.368780 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-16 05:48:58.368790 | orchestrator | Monday 16 February 2026 05:48:55 +0000 (0:00:00.161) 0:02:54.330 *******
2026-02-16 05:48:58.368800 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-02-16 05:48:58.368809 | orchestrator |
2026-02-16 05:48:58.368819 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-16 05:48:58.368828 | orchestrator | Monday 16 February 2026 05:48:56 +0000 (0:00:01.113) 0:02:55.444 *******
2026-02-16 05:48:58.368838 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:48:58.368848 | orchestrator |
2026-02-16 05:48:58.368857 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-16 05:48:58.368867 | orchestrator | Monday 16 February 2026 05:48:56 +0000 (0:00:00.148) 0:02:55.592 *******
2026-02-16 05:48:58.368876 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:58.368886 | orchestrator |
2026-02-16 05:48:58.368896 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-16 05:48:58.368905 | orchestrator | Monday 16 February 2026 05:48:56 +0000 (0:00:00.122) 0:02:55.714 *******
2026-02-16 05:48:58.368916 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:58.368934 | orchestrator |
2026-02-16 05:48:58.368951 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-16 05:48:58.368967 | orchestrator | Monday 16 February 2026 05:48:56 +0000 (0:00:00.223) 0:02:55.938 *******
2026-02-16 05:48:58.368982 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:58.368997 | orchestrator |
2026-02-16 05:48:58.369015 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-16 05:48:58.369031 | orchestrator | Monday 16 February 2026 05:48:57 +0000 (0:00:00.135) 0:02:56.073 *******
2026-02-16 05:48:58.369051 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:58.369070 | orchestrator |
2026-02-16 05:48:58.369126 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-16 05:48:58.369147 | orchestrator | Monday 16 February 2026 05:48:57 +0000 (0:00:00.133) 0:02:56.206 *******
2026-02-16 05:48:58.369159 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:58.369170 | orchestrator |
2026-02-16 05:48:58.369181 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-16 05:48:58.369192 | orchestrator | Monday 16 February 2026 05:48:57 +0000 (0:00:00.134) 0:02:56.341 *******
2026-02-16 05:48:58.369202 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:58.369212 | orchestrator |
2026-02-16 05:48:58.369224 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-16 05:48:58.369234 | orchestrator | Monday 16 February 2026 05:48:57 +0000 (0:00:00.134) 0:02:56.475 *******
2026-02-16 05:48:58.369254 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:58.369266 | orchestrator |
2026-02-16 05:48:58.369276 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-16 05:48:58.369303 | orchestrator | Monday 16 February 2026 05:48:57 +0000 (0:00:00.128) 0:02:56.603 *******
2026-02-16 05:48:58.369315 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:58.369326 | orchestrator |
2026-02-16 05:48:58.369337 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-16 05:48:58.369347 | orchestrator | Monday 16 February 2026 05:48:57 +0000 (0:00:00.132) 0:02:56.735 *******
2026-02-16 05:48:58.369356 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:48:58.369366 | orchestrator |
2026-02-16 05:48:58.369375 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-16 05:48:58.369385 | orchestrator | Monday 16 February 2026 05:48:57 +0000 (0:00:00.119) 0:02:56.854 *******
2026-02-16 05:48:58.369403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-16 05:48:58.369417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-16 05:48:58.369427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0',
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:48:58.369439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-16 05:48:58.369450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:48:58.369460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:48:58.369469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:48:58.369504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2335e156', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': 
'79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-16 05:48:58.581848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:48:58.581969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 05:48:58.581998 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:48:58.582131 | orchestrator | 2026-02-16 05:48:58.582211 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-16 05:48:58.582233 | orchestrator | Monday 16 February 2026 05:48:58 +0000 (0:00:00.479) 0:02:57.334 ******* 2026-02-16 05:48:58.582259 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:58.582313 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:58.582335 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:58.582374 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:58.582421 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:58.582441 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:58.582462 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:58.582512 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2335e156', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:48:58.582550 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:49:28.368722 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 05:49:28.368821 | 
orchestrator | skipping: [testbed-node-0] 2026-02-16 05:49:28.368834 | orchestrator | 2026-02-16 05:49:28.368849 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-16 05:49:28.368865 | orchestrator | Monday 16 February 2026 05:48:58 +0000 (0:00:00.217) 0:02:57.551 ******* 2026-02-16 05:49:28.368899 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:49:28.368913 | orchestrator | 2026-02-16 05:49:28.368926 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-16 05:49:28.368940 | orchestrator | Monday 16 February 2026 05:48:59 +0000 (0:00:00.522) 0:02:58.073 ******* 2026-02-16 05:49:28.368952 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:49:28.368964 | orchestrator | 2026-02-16 05:49:28.368976 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-16 05:49:28.368989 | orchestrator | Monday 16 February 2026 05:48:59 +0000 (0:00:00.144) 0:02:58.218 ******* 2026-02-16 05:49:28.369002 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:49:28.369015 | orchestrator | 2026-02-16 05:49:28.369028 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-16 05:49:28.369041 | orchestrator | Monday 16 February 2026 05:48:59 +0000 (0:00:00.489) 0:02:58.707 ******* 2026-02-16 05:49:28.369054 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:49:28.369068 | orchestrator | 2026-02-16 05:49:28.369127 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-16 05:49:28.369141 | orchestrator | Monday 16 February 2026 05:48:59 +0000 (0:00:00.132) 0:02:58.840 ******* 2026-02-16 05:49:28.369153 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:49:28.369164 | orchestrator | 2026-02-16 05:49:28.369175 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-16 
05:49:28.369187 | orchestrator | Monday 16 February 2026 05:49:00 +0000 (0:00:00.247) 0:02:59.088 ******* 2026-02-16 05:49:28.369198 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:49:28.369210 | orchestrator | 2026-02-16 05:49:28.369223 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-16 05:49:28.369236 | orchestrator | Monday 16 February 2026 05:49:00 +0000 (0:00:00.156) 0:02:59.245 ******* 2026-02-16 05:49:28.369248 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 05:49:28.369261 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-16 05:49:28.369269 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-16 05:49:28.369276 | orchestrator | 2026-02-16 05:49:28.369283 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-16 05:49:28.369290 | orchestrator | Monday 16 February 2026 05:49:01 +0000 (0:00:00.953) 0:03:00.198 ******* 2026-02-16 05:49:28.369298 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-16 05:49:28.369305 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-16 05:49:28.369312 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-16 05:49:28.369319 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:49:28.369326 | orchestrator | 2026-02-16 05:49:28.369333 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-16 05:49:28.369340 | orchestrator | Monday 16 February 2026 05:49:01 +0000 (0:00:00.162) 0:03:00.361 ******* 2026-02-16 05:49:28.369347 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:49:28.369354 | orchestrator | 2026-02-16 05:49:28.369374 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-16 05:49:28.369382 | orchestrator | Monday 16 February 2026 05:49:01 +0000 
(0:00:00.137) 0:03:00.499 ******* 2026-02-16 05:49:28.369389 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 05:49:28.369396 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-16 05:49:28.369404 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-16 05:49:28.369411 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-16 05:49:28.369418 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-16 05:49:28.369425 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-16 05:49:28.369432 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-16 05:49:28.369448 | orchestrator | 2026-02-16 05:49:28.369455 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-16 05:49:28.369462 | orchestrator | Monday 16 February 2026 05:49:02 +0000 (0:00:01.016) 0:03:01.515 ******* 2026-02-16 05:49:28.369469 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 05:49:28.369476 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-16 05:49:28.369483 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-16 05:49:28.369491 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-16 05:49:28.369515 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-16 05:49:28.369523 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-16 05:49:28.369530 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-16 
05:49:28.369537 | orchestrator | 2026-02-16 05:49:28.369544 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-16 05:49:28.369551 | orchestrator | Monday 16 February 2026 05:49:04 +0000 (0:00:01.770) 0:03:03.286 ******* 2026-02-16 05:49:28.369558 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-16 05:49:28.369565 | orchestrator | 2026-02-16 05:49:28.369572 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-16 05:49:28.369616 | orchestrator | Monday 16 February 2026 05:49:05 +0000 (0:00:01.264) 0:03:04.550 ******* 2026-02-16 05:49:28.369625 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:49:28.369632 | orchestrator | 2026-02-16 05:49:28.369639 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-16 05:49:28.369646 | orchestrator | Monday 16 February 2026 05:49:05 +0000 (0:00:00.214) 0:03:04.764 ******* 2026-02-16 05:49:28.369653 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:49:28.369661 | orchestrator | 2026-02-16 05:49:28.369667 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-16 05:49:28.369674 | orchestrator | Monday 16 February 2026 05:49:05 +0000 (0:00:00.143) 0:03:04.908 ******* 2026-02-16 05:49:28.369682 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-16 05:49:28.369688 | orchestrator | 2026-02-16 05:49:28.369696 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-16 05:49:28.369724 | orchestrator | Monday 16 February 2026 05:49:07 +0000 (0:00:01.274) 0:03:06.183 ******* 2026-02-16 05:49:28.369739 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:49:28.369747 | orchestrator | 2026-02-16 05:49:28.369754 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 
2026-02-16 05:49:28.369761 | orchestrator | Monday 16 February 2026 05:49:07 +0000 (0:00:00.147) 0:03:06.330 ******* 2026-02-16 05:49:28.369768 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 05:49:28.369775 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-16 05:49:28.369782 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-16 05:49:28.369789 | orchestrator | 2026-02-16 05:49:28.369796 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-16 05:49:28.369803 | orchestrator | Monday 16 February 2026 05:49:08 +0000 (0:00:01.551) 0:03:07.882 ******* 2026-02-16 05:49:28.369811 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0']) 2026-02-16 05:49:28.369818 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1']) 2026-02-16 05:49:28.369827 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2']) 2026-02-16 05:49:28.369834 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0']) 2026-02-16 05:49:28.369841 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1']) 2026-02-16 05:49:28.369854 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2']) 2026-02-16 05:49:28.369862 | orchestrator | 2026-02-16 05:49:28.369869 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-16 05:49:28.369876 | orchestrator | Monday 16 February 2026 05:49:21 +0000 (0:00:12.793) 0:03:20.676 ******* 2026-02-16 05:49:28.369883 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 05:49:28.369890 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-0) 2026-02-16 05:49:28.369897 | orchestrator | 2026-02-16 05:49:28.369905 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-16 05:49:28.369916 | orchestrator | Monday 16 February 2026 05:49:24 +0000 (0:00:02.895) 0:03:23.571 ******* 2026-02-16 05:49:28.369923 | orchestrator | changed: [testbed-node-0] 2026-02-16 05:49:28.369931 | orchestrator | 2026-02-16 05:49:28.369938 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-16 05:49:28.369945 | orchestrator | Monday 16 February 2026 05:49:26 +0000 (0:00:01.531) 0:03:25.103 ******* 2026-02-16 05:49:28.369952 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-02-16 05:49:28.369959 | orchestrator | 2026-02-16 05:49:28.369966 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-16 05:49:28.369973 | orchestrator | Monday 16 February 2026 05:49:26 +0000 (0:00:00.560) 0:03:25.663 ******* 2026-02-16 05:49:28.369980 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-02-16 05:49:28.369987 | orchestrator | 2026-02-16 05:49:28.369994 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-16 05:49:28.370002 | orchestrator | Monday 16 February 2026 05:49:27 +0000 (0:00:00.821) 0:03:26.485 ******* 2026-02-16 05:49:28.370009 | orchestrator | ok: [testbed-node-0] 2026-02-16 05:49:28.370063 | orchestrator | 2026-02-16 05:49:28.370072 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-16 05:49:28.370103 | orchestrator | Monday 16 February 2026 05:49:28 +0000 (0:00:00.572) 0:03:27.057 ******* 2026-02-16 05:49:28.370115 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:49:28.370128 | orchestrator | 
2026-02-16 05:49:28.370140 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-16 05:49:28.370152 | orchestrator | Monday 16 February 2026  05:49:28 +0000 (0:00:00.143)       0:03:27.201 *******
2026-02-16 05:49:28.370162 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:28.370169 | orchestrator |
2026-02-16 05:49:28.370184 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-16 05:49:41.206777 | orchestrator | Monday 16 February 2026  05:49:28 +0000 (0:00:00.135)       0:03:27.337 *******
2026-02-16 05:49:41.206887 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.206902 | orchestrator |
2026-02-16 05:49:41.206914 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-16 05:49:41.206924 | orchestrator | Monday 16 February 2026  05:49:28 +0000 (0:00:00.163)       0:03:27.501 *******
2026-02-16 05:49:41.206934 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:49:41.206945 | orchestrator |
2026-02-16 05:49:41.206955 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-16 05:49:41.206965 | orchestrator | Monday 16 February 2026  05:49:29 +0000 (0:00:00.607)       0:03:28.108 *******
2026-02-16 05:49:41.206974 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.206984 | orchestrator |
2026-02-16 05:49:41.206994 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-16 05:49:41.207004 | orchestrator | Monday 16 February 2026  05:49:29 +0000 (0:00:00.142)       0:03:28.251 *******
2026-02-16 05:49:41.207016 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.207034 | orchestrator |
2026-02-16 05:49:41.207053 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-16 05:49:41.207184 | orchestrator | Monday 16 February 2026  05:49:29 +0000 (0:00:00.137)       0:03:28.389 *******
2026-02-16 05:49:41.207203 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:49:41.207219 | orchestrator |
2026-02-16 05:49:41.207229 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-16 05:49:41.207239 | orchestrator | Monday 16 February 2026  05:49:30 +0000 (0:00:00.625)       0:03:29.014 *******
2026-02-16 05:49:41.207248 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:49:41.207258 | orchestrator |
2026-02-16 05:49:41.207267 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-16 05:49:41.207277 | orchestrator | Monday 16 February 2026  05:49:30 +0000 (0:00:00.599)       0:03:29.614 *******
2026-02-16 05:49:41.207286 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.207296 | orchestrator |
2026-02-16 05:49:41.207305 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-16 05:49:41.207315 | orchestrator | Monday 16 February 2026  05:49:30 +0000 (0:00:00.123)       0:03:29.737 *******
2026-02-16 05:49:41.207324 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:49:41.207333 | orchestrator |
2026-02-16 05:49:41.207343 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-16 05:49:41.207352 | orchestrator | Monday 16 February 2026  05:49:30 +0000 (0:00:00.155)       0:03:29.892 *******
2026-02-16 05:49:41.207362 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.207371 | orchestrator |
2026-02-16 05:49:41.207381 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-16 05:49:41.207390 | orchestrator | Monday 16 February 2026  05:49:31 +0000 (0:00:00.420)       0:03:30.059 *******
2026-02-16 05:49:41.207400 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.207409 | orchestrator |
2026-02-16 05:49:41.207418 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-16 05:49:41.207428 | orchestrator | Monday 16 February 2026  05:49:31 +0000 (0:00:00.420)       0:03:30.479 *******
2026-02-16 05:49:41.207438 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.207447 | orchestrator |
2026-02-16 05:49:41.207456 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-16 05:49:41.207466 | orchestrator | Monday 16 February 2026  05:49:31 +0000 (0:00:00.130)       0:03:30.610 *******
2026-02-16 05:49:41.207475 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.207486 | orchestrator |
2026-02-16 05:49:41.207495 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-16 05:49:41.207505 | orchestrator | Monday 16 February 2026  05:49:31 +0000 (0:00:00.141)       0:03:30.752 *******
2026-02-16 05:49:41.207514 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.207524 | orchestrator |
2026-02-16 05:49:41.207533 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-16 05:49:41.207542 | orchestrator | Monday 16 February 2026  05:49:31 +0000 (0:00:00.125)       0:03:30.877 *******
2026-02-16 05:49:41.207552 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:49:41.207561 | orchestrator |
2026-02-16 05:49:41.207571 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-16 05:49:41.207595 | orchestrator | Monday 16 February 2026  05:49:32 +0000 (0:00:00.170)       0:03:31.047 *******
2026-02-16 05:49:41.207605 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:49:41.207614 | orchestrator |
2026-02-16 05:49:41.207624 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-16 05:49:41.207633 | orchestrator | Monday 16 February 2026  05:49:32 +0000 (0:00:00.141)       0:03:31.189 *******
2026-02-16 05:49:41.207643 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:49:41.207652 | orchestrator |
2026-02-16 05:49:41.207662 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-16 05:49:41.207671 | orchestrator | Monday 16 February 2026  05:49:32 +0000 (0:00:00.213)       0:03:31.402 *******
2026-02-16 05:49:41.207680 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.207690 | orchestrator |
2026-02-16 05:49:41.207699 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-16 05:49:41.207709 | orchestrator | Monday 16 February 2026  05:49:32 +0000 (0:00:00.133)       0:03:31.536 *******
2026-02-16 05:49:41.207725 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.207735 | orchestrator |
2026-02-16 05:49:41.207745 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-16 05:49:41.207754 | orchestrator | Monday 16 February 2026  05:49:32 +0000 (0:00:00.136)       0:03:31.673 *******
2026-02-16 05:49:41.207764 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.207776 | orchestrator |
2026-02-16 05:49:41.207792 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-16 05:49:41.207814 | orchestrator | Monday 16 February 2026  05:49:32 +0000 (0:00:00.143)       0:03:31.816 *******
2026-02-16 05:49:41.207833 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.207848 | orchestrator |
2026-02-16 05:49:41.207863 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-16 05:49:41.207878 | orchestrator | Monday 16 February 2026  05:49:32 +0000 (0:00:00.132)       0:03:31.949 *******
2026-02-16 05:49:41.207918 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.207934 | orchestrator |
2026-02-16 05:49:41.207950 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-16 05:49:41.207966 | orchestrator | Monday 16 February 2026  05:49:33 +0000 (0:00:00.131)       0:03:32.080 *******
2026-02-16 05:49:41.207984 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.208001 | orchestrator |
2026-02-16 05:49:41.208017 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-16 05:49:41.208034 | orchestrator | Monday 16 February 2026  05:49:33 +0000 (0:00:00.128)       0:03:32.209 *******
2026-02-16 05:49:41.208045 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.208054 | orchestrator |
2026-02-16 05:49:41.208083 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-16 05:49:41.208095 | orchestrator | Monday 16 February 2026  05:49:33 +0000 (0:00:00.364)       0:03:32.573 *******
2026-02-16 05:49:41.208104 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.208114 | orchestrator |
2026-02-16 05:49:41.208123 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-16 05:49:41.208132 | orchestrator | Monday 16 February 2026  05:49:33 +0000 (0:00:00.137)       0:03:32.711 *******
2026-02-16 05:49:41.208142 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.208151 | orchestrator |
2026-02-16 05:49:41.208161 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-16 05:49:41.208170 | orchestrator | Monday 16 February 2026  05:49:33 +0000 (0:00:00.139)       0:03:32.850 *******
2026-02-16 05:49:41.208180 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.208189 | orchestrator |
2026-02-16 05:49:41.208199 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-16 05:49:41.208208 | orchestrator | Monday 16 February 2026  05:49:34 +0000 (0:00:00.133)       0:03:32.983 *******
2026-02-16 05:49:41.208217 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.208227 | orchestrator |
2026-02-16 05:49:41.208236 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-16 05:49:41.208245 | orchestrator | Monday 16 February 2026  05:49:34 +0000 (0:00:00.123)       0:03:33.107 *******
2026-02-16 05:49:41.208255 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.208264 | orchestrator |
2026-02-16 05:49:41.208273 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-16 05:49:41.208283 | orchestrator | Monday 16 February 2026  05:49:34 +0000 (0:00:00.199)       0:03:33.306 *******
2026-02-16 05:49:41.208292 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:49:41.208301 | orchestrator |
2026-02-16 05:49:41.208311 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-16 05:49:41.208320 | orchestrator | Monday 16 February 2026  05:49:35 +0000 (0:00:00.996)       0:03:34.303 *******
2026-02-16 05:49:41.208329 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:49:41.208339 | orchestrator |
2026-02-16 05:49:41.208348 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-16 05:49:41.208368 | orchestrator | Monday 16 February 2026  05:49:36 +0000 (0:00:01.422)       0:03:35.725 *******
2026-02-16 05:49:41.208378 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-02-16 05:49:41.208388 | orchestrator |
2026-02-16 05:49:41.208398 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-16 05:49:41.208407 | orchestrator | Monday 16 February 2026  05:49:37 +0000 (0:00:00.601)       0:03:36.327 *******
2026-02-16 05:49:41.208417 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.208426 | orchestrator |
2026-02-16 05:49:41.208435 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-16 05:49:41.208445 | orchestrator | Monday 16 February 2026  05:49:37 +0000 (0:00:00.139)       0:03:36.466 *******
2026-02-16 05:49:41.208454 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.208464 | orchestrator |
2026-02-16 05:49:41.208473 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-16 05:49:41.208482 | orchestrator | Monday 16 February 2026  05:49:37 +0000 (0:00:00.138)       0:03:36.605 *******
2026-02-16 05:49:41.208492 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-16 05:49:41.208508 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-16 05:49:41.208518 | orchestrator |
2026-02-16 05:49:41.208528 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-16 05:49:41.208537 | orchestrator | Monday 16 February 2026  05:49:38 +0000 (0:00:01.112)       0:03:37.718 *******
2026-02-16 05:49:41.208546 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:49:41.208556 | orchestrator |
2026-02-16 05:49:41.208565 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-16 05:49:41.208575 | orchestrator | Monday 16 February 2026  05:49:39 +0000 (0:00:00.662)       0:03:38.380 *******
2026-02-16 05:49:41.208584 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.208593 | orchestrator |
2026-02-16 05:49:41.208603 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-16 05:49:41.208612 | orchestrator | Monday 16 February 2026  05:49:39 +0000 (0:00:00.157)       0:03:38.537 *******
2026-02-16 05:49:41.208621 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.208631 | orchestrator |
2026-02-16 05:49:41.208640 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-16 05:49:41.208649 | orchestrator | Monday 16 February 2026  05:49:39 +0000 (0:00:00.126)       0:03:38.663 *******
2026-02-16 05:49:41.208659 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:41.208668 | orchestrator |
2026-02-16 05:49:41.208678 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-16 05:49:41.208687 | orchestrator | Monday 16 February 2026  05:49:39 +0000 (0:00:00.136)       0:03:38.800 *******
2026-02-16 05:49:41.208696 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-02-16 05:49:41.208706 | orchestrator |
2026-02-16 05:49:41.208716 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-16 05:49:41.208735 | orchestrator | Monday 16 February 2026  05:49:40 +0000 (0:00:00.613)       0:03:39.413 *******
2026-02-16 05:49:41.208753 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:49:41.208772 | orchestrator |
2026-02-16 05:49:41.208801 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-16 05:49:55.425469 | orchestrator | Monday 16 February 2026  05:49:41 +0000 (0:00:00.759)       0:03:40.172 *******
2026-02-16 05:49:55.425585 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-16 05:49:55.425601 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-16 05:49:55.425613 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-16 05:49:55.425624 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.425637 | orchestrator |
2026-02-16 05:49:55.425650 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-16 05:49:55.425685 | orchestrator | Monday 16 February 2026  05:49:41 +0000 (0:00:00.158)       0:03:40.331 *******
2026-02-16 05:49:55.425697 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.425708 | orchestrator |
2026-02-16 05:49:55.425719 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-16 05:49:55.425729 | orchestrator | Monday 16 February 2026  05:49:41 +0000 (0:00:00.138)       0:03:40.469 *******
2026-02-16 05:49:55.425740 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.425751 | orchestrator |
2026-02-16 05:49:55.425761 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-16 05:49:55.425772 | orchestrator | Monday 16 February 2026  05:49:41 +0000 (0:00:00.161)       0:03:40.631 *******
2026-02-16 05:49:55.425782 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.425793 | orchestrator |
2026-02-16 05:49:55.425804 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-16 05:49:55.425814 | orchestrator | Monday 16 February 2026  05:49:41 +0000 (0:00:00.149)       0:03:40.781 *******
2026-02-16 05:49:55.425825 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.425836 | orchestrator |
2026-02-16 05:49:55.425847 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-16 05:49:55.425857 | orchestrator | Monday 16 February 2026  05:49:41 +0000 (0:00:00.137)       0:03:40.918 *******
2026-02-16 05:49:55.425868 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.425878 | orchestrator |
2026-02-16 05:49:55.425889 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-16 05:49:55.425900 | orchestrator | Monday 16 February 2026  05:49:42 +0000 (0:00:00.378)       0:03:41.297 *******
2026-02-16 05:49:55.425921 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:49:55.425942 | orchestrator |
2026-02-16 05:49:55.425960 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-16 05:49:55.425980 | orchestrator | Monday 16 February 2026  05:49:43 +0000 (0:00:01.671)       0:03:42.968 *******
2026-02-16 05:49:55.425999 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:49:55.426224 | orchestrator |
2026-02-16 05:49:55.426257 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-16 05:49:55.426277 | orchestrator | Monday 16 February 2026  05:49:44 +0000 (0:00:00.137)       0:03:43.106 *******
2026-02-16 05:49:55.426297 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-02-16 05:49:55.426317 | orchestrator |
2026-02-16 05:49:55.426337 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-16 05:49:55.426353 | orchestrator | Monday 16 February 2026  05:49:44 +0000 (0:00:00.677)       0:03:43.783 *******
2026-02-16 05:49:55.426366 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.426376 | orchestrator |
2026-02-16 05:49:55.426387 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-16 05:49:55.426398 | orchestrator | Monday 16 February 2026  05:49:44 +0000 (0:00:00.155)       0:03:43.939 *******
2026-02-16 05:49:55.426409 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.426420 | orchestrator |
2026-02-16 05:49:55.426431 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-16 05:49:55.426441 | orchestrator | Monday 16 February 2026  05:49:45 +0000 (0:00:00.158)       0:03:44.097 *******
2026-02-16 05:49:55.426471 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.426493 | orchestrator |
2026-02-16 05:49:55.426505 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-16 05:49:55.426532 | orchestrator | Monday 16 February 2026  05:49:45 +0000 (0:00:00.143)       0:03:44.240 *******
2026-02-16 05:49:55.426544 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.426555 | orchestrator |
2026-02-16 05:49:55.426566 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-16 05:49:55.426577 | orchestrator | Monday 16 February 2026  05:49:45 +0000 (0:00:00.150)       0:03:44.391 *******
2026-02-16 05:49:55.426588 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.426599 | orchestrator |
2026-02-16 05:49:55.426622 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-16 05:49:55.426633 | orchestrator | Monday 16 February 2026  05:49:45 +0000 (0:00:00.141)       0:03:44.533 *******
2026-02-16 05:49:55.426644 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.426654 | orchestrator |
2026-02-16 05:49:55.426665 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-16 05:49:55.426676 | orchestrator | Monday 16 February 2026  05:49:45 +0000 (0:00:00.177)       0:03:44.710 *******
2026-02-16 05:49:55.426686 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.426697 | orchestrator |
2026-02-16 05:49:55.426708 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-16 05:49:55.426719 | orchestrator | Monday 16 February 2026  05:49:45 +0000 (0:00:00.142)       0:03:44.853 *******
2026-02-16 05:49:55.426729 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.426740 | orchestrator |
2026-02-16 05:49:55.426751 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-16 05:49:55.426762 | orchestrator | Monday 16 February 2026  05:49:46 +0000 (0:00:00.148)       0:03:45.001 *******
2026-02-16 05:49:55.426772 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:49:55.426784 | orchestrator |
2026-02-16 05:49:55.426795 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-16 05:49:55.426805 | orchestrator | Monday 16 February 2026  05:49:46 +0000 (0:00:00.451)       0:03:45.453 *******
2026-02-16 05:49:55.426816 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-02-16 05:49:55.426828 | orchestrator |
2026-02-16 05:49:55.426863 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-16 05:49:55.426874 | orchestrator | Monday 16 February 2026  05:49:47 +0000 (0:00:00.553)       0:03:46.007 *******
2026-02-16 05:49:55.426885 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-02-16 05:49:55.426896 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-16 05:49:55.426907 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-16 05:49:55.426918 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-16 05:49:55.426929 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-16 05:49:55.426939 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-16 05:49:55.426950 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-16 05:49:55.426961 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-16 05:49:55.426972 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-16 05:49:55.426982 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-16 05:49:55.426993 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-16 05:49:55.427004 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-16 05:49:55.427014 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-16 05:49:55.427025 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-16 05:49:55.427036 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-02-16 05:49:55.427047 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-02-16 05:49:55.427095 | orchestrator |
2026-02-16 05:49:55.427114 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-16 05:49:55.427132 | orchestrator | Monday 16 February 2026  05:49:53 +0000 (0:00:06.156)       0:03:52.163 *******
2026-02-16 05:49:55.427150 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.427179 | orchestrator |
2026-02-16 05:49:55.427199 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-16 05:49:55.427219 | orchestrator | Monday 16 February 2026  05:49:53 +0000 (0:00:00.134)       0:03:52.298 *******
2026-02-16 05:49:55.427238 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.427257 | orchestrator |
2026-02-16 05:49:55.427268 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-16 05:49:55.427289 | orchestrator | Monday 16 February 2026  05:49:53 +0000 (0:00:00.133)       0:03:52.432 *******
2026-02-16 05:49:55.427301 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.427311 | orchestrator |
2026-02-16 05:49:55.427322 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-16 05:49:55.427333 | orchestrator | Monday 16 February 2026  05:49:53 +0000 (0:00:00.135)       0:03:52.567 *******
2026-02-16 05:49:55.427344 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.427354 | orchestrator |
2026-02-16 05:49:55.427365 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-16 05:49:55.427376 | orchestrator | Monday 16 February 2026  05:49:53 +0000 (0:00:00.136)       0:03:52.704 *******
2026-02-16 05:49:55.427386 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.427397 | orchestrator |
2026-02-16 05:49:55.427408 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-16 05:49:55.427419 | orchestrator | Monday 16 February 2026  05:49:53 +0000 (0:00:00.130)       0:03:52.835 *******
2026-02-16 05:49:55.427429 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.427440 | orchestrator |
2026-02-16 05:49:55.427451 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-16 05:49:55.427461 | orchestrator | Monday 16 February 2026  05:49:53 +0000 (0:00:00.125)       0:03:52.960 *******
2026-02-16 05:49:55.427472 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.427483 | orchestrator |
2026-02-16 05:49:55.427494 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-16 05:49:55.427512 | orchestrator | Monday 16 February 2026  05:49:54 +0000 (0:00:00.145)       0:03:53.106 *******
2026-02-16 05:49:55.427523 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.427534 | orchestrator |
2026-02-16 05:49:55.427544 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-16 05:49:55.427555 | orchestrator | Monday 16 February 2026  05:49:54 +0000 (0:00:00.143)       0:03:53.249 *******
2026-02-16 05:49:55.427566 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.427576 | orchestrator |
2026-02-16 05:49:55.427587 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-16 05:49:55.427598 | orchestrator | Monday 16 February 2026  05:49:54 +0000 (0:00:00.133)       0:03:53.383 *******
2026-02-16 05:49:55.427609 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.427619 | orchestrator |
2026-02-16 05:49:55.427630 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-16 05:49:55.427641 | orchestrator | Monday 16 February 2026  05:49:54 +0000 (0:00:00.366)       0:03:53.749 *******
2026-02-16 05:49:55.427652 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.427663 | orchestrator |
2026-02-16 05:49:55.427673 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-16 05:49:55.427684 | orchestrator | Monday 16 February 2026  05:49:54 +0000 (0:00:00.126)       0:03:53.875 *******
2026-02-16 05:49:55.427694 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.427705 | orchestrator |
2026-02-16 05:49:55.427716 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-16 05:49:55.427727 | orchestrator | Monday 16 February 2026  05:49:55 +0000 (0:00:00.135)       0:03:54.011 *******
2026-02-16 05:49:55.427737 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.427748 | orchestrator |
2026-02-16 05:49:55.427759 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-16 05:49:55.427770 | orchestrator | Monday 16 February 2026  05:49:55 +0000 (0:00:00.250)       0:03:54.261 *******
2026-02-16 05:49:55.427780 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:49:55.427795 | orchestrator |
2026-02-16 05:49:55.427825 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-16 05:50:16.058576 | orchestrator | Monday 16 February 2026  05:49:55 +0000 (0:00:00.130)       0:03:54.392 *******
2026-02-16 05:50:16.058692 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:50:16.058735 | orchestrator |
2026-02-16 05:50:16.058750 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-16 05:50:16.058761 | orchestrator | Monday 16 February 2026  05:49:55 +0000 (0:00:00.218)       0:03:54.611 *******
2026-02-16 05:50:16.058772 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:50:16.058783 | orchestrator |
2026-02-16 05:50:16.058794 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-16 05:50:16.058805 | orchestrator | Monday 16 February 2026  05:49:55 +0000 (0:00:00.136)       0:03:54.748 *******
2026-02-16 05:50:16.058815 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:50:16.058826 | orchestrator |
2026-02-16 05:50:16.058838 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-16 05:50:16.058849 | orchestrator | Monday 16 February 2026  05:49:55 +0000 (0:00:00.129)       0:03:54.877 *******
2026-02-16 05:50:16.058860 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:50:16.058871 | orchestrator |
2026-02-16 05:50:16.058882 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-16 05:50:16.058892 | orchestrator | Monday 16 February 2026  05:49:56 +0000 (0:00:00.147)       0:03:55.025 *******
2026-02-16 05:50:16.058903 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:50:16.058914 | orchestrator |
2026-02-16 05:50:16.058924 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-16 05:50:16.058935 | orchestrator | Monday 16 February 2026  05:49:56 +0000 (0:00:00.121)       0:03:55.147 *******
2026-02-16 05:50:16.058946 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:50:16.058956 | orchestrator |
2026-02-16 05:50:16.058967 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-16 05:50:16.058977 | orchestrator | Monday 16 February 2026  05:49:56 +0000 (0:00:00.146)       0:03:55.294 *******
2026-02-16 05:50:16.058988 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:50:16.058998 | orchestrator |
2026-02-16 05:50:16.059009 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-16 05:50:16.059019 | orchestrator | Monday 16 February 2026  05:49:56 +0000 (0:00:00.134)       0:03:55.429 *******
2026-02-16 05:50:16.059030 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-16 05:50:16.059075 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-16 05:50:16.059086 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-16 05:50:16.059097 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:50:16.059108 | orchestrator |
2026-02-16 05:50:16.059121 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-16 05:50:16.059135 | orchestrator | Monday 16 February 2026  05:49:57 +0000 (0:00:00.701)       0:03:56.130 *******
2026-02-16 05:50:16.059147 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-16 05:50:16.059160 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-16 05:50:16.059172 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-16 05:50:16.059184 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:50:16.059197 | orchestrator |
2026-02-16 05:50:16.059209 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-16 05:50:16.059222 | orchestrator | Monday 16 February 2026  05:49:58 +0000 (0:00:00.933)       0:03:57.063 *******
2026-02-16 05:50:16.059234 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-16 05:50:16.059246 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-16 05:50:16.059259 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-16 05:50:16.059286 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:50:16.059298 | orchestrator |
2026-02-16 05:50:16.059311 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-16 05:50:16.059350 | orchestrator | Monday 16 February 2026  05:49:58 +0000 (0:00:00.413)       0:03:57.476 *******
2026-02-16 05:50:16.059364 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:50:16.059377 | orchestrator |
2026-02-16 05:50:16.059408 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-16 05:50:16.059422 | orchestrator | Monday 16 February 2026  05:49:58 +0000 (0:00:00.154)       0:03:57.631 *******
2026-02-16 05:50:16.059435 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-16 05:50:16.059448 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:50:16.059460 | orchestrator |
2026-02-16 05:50:16.059472 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-16 05:50:16.059483 | orchestrator | Monday 16 February 2026  05:49:59 +0000 (0:00:00.607)       0:03:58.239 *******
2026-02-16 05:50:16.059494 | orchestrator | changed: [testbed-node-0]
2026-02-16 05:50:16.059504 | orchestrator |
2026-02-16 05:50:16.059515 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-16 05:50:16.059532 | orchestrator | Monday 16 February 2026  05:50:00 +0000 (0:00:00.843)       0:03:59.082 *******
2026-02-16 05:50:16.059552 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:50:16.059584 | orchestrator |
2026-02-16 05:50:16.059604 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-16 05:50:16.059624 | orchestrator | Monday 16 February 2026  05:50:00 +0000 (0:00:00.179)       0:03:59.261 *******
2026-02-16 05:50:16.059646 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0
2026-02-16 05:50:16.059667 | orchestrator |
2026-02-16 05:50:16.059687 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-16 05:50:16.059708 | orchestrator | Monday 16 February 2026  05:50:00 +0000 (0:00:00.616)       0:03:59.878 *******
2026-02-16 05:50:16.059726 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-02-16 05:50:16.059745 | orchestrator |
2026-02-16 05:50:16.059765 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-16 05:50:16.059784 | orchestrator | Monday 16 February 2026  05:50:03 +0000 (0:00:02.269)       0:04:02.148 *******
2026-02-16 05:50:16.059804 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:50:16.059909 | orchestrator |
2026-02-16 05:50:16.059953 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-16 05:50:16.059973 | orchestrator | Monday 16 February 2026  05:50:03 +0000 (0:00:00.173)       0:04:02.321 *******
2026-02-16 05:50:16.059992 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:50:16.060015 | orchestrator |
2026-02-16 05:50:16.060059 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-16 05:50:16.060080 | orchestrator | Monday 16 February 2026  05:50:03 +0000 (0:00:00.174)       0:04:02.495 *******
2026-02-16 05:50:16.060098 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:50:16.060117 | orchestrator |
2026-02-16 05:50:16.060135 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-16 05:50:16.060152 | orchestrator | Monday 16 February 2026  05:50:03 +0000 (0:00:00.401)       0:04:02.897 *******
2026-02-16 05:50:16.060170 | orchestrator | changed: [testbed-node-0]
2026-02-16 05:50:16.060188 | orchestrator |
2026-02-16 05:50:16.060205 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-16 05:50:16.060223 | orchestrator | Monday 16 February 2026  05:50:05 +0000 (0:00:01.136)       0:04:04.034 *******
2026-02-16 05:50:16.060240 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:50:16.060258 | orchestrator |
2026-02-16 05:50:16.060274 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-16 05:50:16.060293 | orchestrator | Monday 16 February 2026  05:50:05 +0000 (0:00:00.623)       0:04:04.658 *******
2026-02-16 05:50:16.060310 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:50:16.060327 | orchestrator |
2026-02-16 05:50:16.060345 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-16 05:50:16.060363 | orchestrator | Monday 16 February 2026  05:50:06 +0000 (0:00:00.517)       0:04:05.176 *******
2026-02-16 05:50:16.060380 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:50:16.060398 | orchestrator |
2026-02-16 05:50:16.060415 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-16 05:50:16.060433 | orchestrator | Monday 16 February 2026  05:50:06 +0000 (0:00:00.510)       0:04:05.687 *******
2026-02-16 05:50:16.060465 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:50:16.060483 | orchestrator |
2026-02-16 05:50:16.060518 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-16 05:50:16.060538 | orchestrator | Monday 16 February 2026  05:50:07 +0000 (0:00:00.715)       0:04:06.402 *******
2026-02-16 05:50:16.060556 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:50:16.060575 | orchestrator |
2026-02-16 05:50:16.060593 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-16 05:50:16.060610 | orchestrator | Monday 16 February 2026  05:50:08 +0000 (0:00:00.730)       0:04:07.133 *******
2026-02-16 05:50:16.060628 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-16 05:50:16.060646 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-16 05:50:16.060664 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-16 05:50:16.060681 | orchestrator | ok: [testbed-node-0 -> {{ item }}]
2026-02-16 05:50:16.060700 | orchestrator |
2026-02-16 05:50:16.060719 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-16 05:50:16.060735 | orchestrator | Monday 16 February 2026  05:50:11 +0000 (0:00:02.912)       0:04:10.045 *******
2026-02-16 05:50:16.060753 | orchestrator | changed: [testbed-node-0]
2026-02-16 05:50:16.060771 | orchestrator |
2026-02-16 05:50:16.060788 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-16 05:50:16.060807 | orchestrator | Monday 16 February 2026  05:50:12 +0000 (0:00:01.229)       0:04:11.274 *******
2026-02-16 05:50:16.060824 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:50:16.060843 | orchestrator |
2026-02-16 05:50:16.060862 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-16 05:50:16.060880 | orchestrator | Monday 16 February 2026  05:50:12 +0000 (0:00:00.144)       0:04:11.419 *******
2026-02-16 05:50:16.060898 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:50:16.060916 | orchestrator |
2026-02-16 05:50:16.060934 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-16 05:50:16.060962 | orchestrator | Monday 16 February 2026  05:50:12 +0000 (0:00:00.146)       0:04:11.565 *******
2026-02-16 05:50:16.060980 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:50:16.060998 | orchestrator |
2026-02-16 05:50:16.061016 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-16 05:50:16.061058 | orchestrator | Monday 16 February 2026  05:50:13 +0000 (0:00:01.022)       0:04:12.587 *******
2026-02-16 05:50:16.061080 | orchestrator | ok: [testbed-node-0]
2026-02-16 05:50:16.061098 | orchestrator |
2026-02-16 05:50:16.061116 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-16 05:50:16.061133 | orchestrator | Monday 16 February 2026  05:50:14 +0000 (0:00:00.551)       0:04:13.138 *******
2026-02-16 05:50:16.061150 | orchestrator | skipping: [testbed-node-0]
2026-02-16 05:50:16.061167 | orchestrator |
2026-02-16
05:50:16.061184 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-16 05:50:16.061203 | orchestrator | Monday 16 February 2026 05:50:14 +0000 (0:00:00.391) 0:04:13.530 ******* 2026-02-16 05:50:16.061220 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-02-16 05:50:16.061238 | orchestrator | 2026-02-16 05:50:16.061255 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-16 05:50:16.061273 | orchestrator | Monday 16 February 2026 05:50:15 +0000 (0:00:00.599) 0:04:14.130 ******* 2026-02-16 05:50:16.061290 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:50:16.061307 | orchestrator | 2026-02-16 05:50:16.061324 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-16 05:50:16.061341 | orchestrator | Monday 16 February 2026 05:50:15 +0000 (0:00:00.154) 0:04:14.284 ******* 2026-02-16 05:50:16.061358 | orchestrator | skipping: [testbed-node-0] 2026-02-16 05:50:16.061376 | orchestrator | 2026-02-16 05:50:16.061393 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-16 05:50:16.061410 | orchestrator | Monday 16 February 2026 05:50:15 +0000 (0:00:00.123) 0:04:14.407 ******* 2026-02-16 05:50:16.061443 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-02-16 05:50:16.061461 | orchestrator | 2026-02-16 05:50:16.061498 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-16 06:04:00.688440 | orchestrator | Monday 16 February 2026 05:50:16 +0000 (0:00:00.616) 0:04:15.024 ******* 2026-02-16 06:04:00.688571 | orchestrator | changed: [testbed-node-0] 2026-02-16 06:04:00.688590 | orchestrator | 2026-02-16 06:04:00.688603 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 
2026-02-16 06:04:00.688615 | orchestrator | Monday 16 February 2026 05:50:17 +0000 (0:00:01.449) 0:04:16.474 ******* 2026-02-16 06:04:00.688627 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:04:00.688638 | orchestrator | 2026-02-16 06:04:00.688651 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-16 06:04:00.688662 | orchestrator | Monday 16 February 2026 05:50:18 +0000 (0:00:01.000) 0:04:17.474 ******* 2026-02-16 06:04:00.688673 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:04:00.688684 | orchestrator | 2026-02-16 06:04:00.688764 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-16 06:04:00.688779 | orchestrator | Monday 16 February 2026 05:50:19 +0000 (0:00:01.413) 0:04:18.888 ******* 2026-02-16 06:04:00.688790 | orchestrator | changed: [testbed-node-0] 2026-02-16 06:04:00.688801 | orchestrator | 2026-02-16 06:04:00.688813 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-16 06:04:00.688824 | orchestrator | Monday 16 February 2026 05:50:22 +0000 (0:00:02.394) 0:04:21.282 ******* 2026-02-16 06:04:00.688835 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-02-16 06:04:00.688847 | orchestrator | 2026-02-16 06:04:00.688858 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-16 06:04:00.688869 | orchestrator | Monday 16 February 2026 05:50:22 +0000 (0:00:00.582) 0:04:21.865 ******* 2026-02-16 06:04:00.688880 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-16 06:04:00.688892 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:04:00.688903 | orchestrator | 2026-02-16 06:04:00.688913 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-16 06:04:00.688924 | orchestrator | Monday 16 February 2026 05:50:45 +0000 (0:00:22.307) 0:04:44.173 ******* 2026-02-16 06:04:00.688935 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:04:00.688947 | orchestrator | 2026-02-16 06:04:00.688969 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-16 06:04:00.688992 | orchestrator | Monday 16 February 2026 05:50:47 +0000 (0:00:02.188) 0:04:46.361 ******* 2026-02-16 06:04:00.689015 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:04:00.689037 | orchestrator | 2026-02-16 06:04:00.689052 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-16 06:04:00.689075 | orchestrator | Monday 16 February 2026 05:50:47 +0000 (0:00:00.133) 0:04:46.495 ******* 2026-02-16 06:04:00.689089 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__914de6c9ac680119b10f48c8706e0b5d20c94ff3'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-16 06:04:00.689103 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__914de6c9ac680119b10f48c8706e0b5d20c94ff3'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-16 06:04:00.689133 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__914de6c9ac680119b10f48c8706e0b5d20c94ff3'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-16 06:04:00.689170 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__914de6c9ac680119b10f48c8706e0b5d20c94ff3'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-16 06:04:00.689183 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__914de6c9ac680119b10f48c8706e0b5d20c94ff3'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-16 06:04:00.689213 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__914de6c9ac680119b10f48c8706e0b5d20c94ff3'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__914de6c9ac680119b10f48c8706e0b5d20c94ff3'}])  2026-02-16 06:04:00.689227 | orchestrator | 2026-02-16 06:04:00.689238 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-16 06:04:00.689249 | orchestrator | Monday 16 February 2026 05:50:57 +0000 (0:00:09.629) 0:04:56.124 ******* 2026-02-16 06:04:00.689260 | orchestrator | changed: [testbed-node-0] 2026-02-16 06:04:00.689270 | orchestrator | 
2026-02-16 06:04:00.689281 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-16 06:04:00.689292 | orchestrator | Monday 16 February 2026 05:50:58 +0000 (0:00:01.474) 0:04:57.599 ******* 2026-02-16 06:04:00.689303 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 06:04:00.689314 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-16 06:04:00.689325 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-16 06:04:00.689336 | orchestrator | 2026-02-16 06:04:00.689346 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-16 06:04:00.689357 | orchestrator | Monday 16 February 2026 05:50:59 +0000 (0:00:01.151) 0:04:58.750 ******* 2026-02-16 06:04:00.689368 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-16 06:04:00.689379 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-16 06:04:00.689389 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-16 06:04:00.689400 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:04:00.689411 | orchestrator | 2026-02-16 06:04:00.689421 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-16 06:04:00.689432 | orchestrator | Monday 16 February 2026 05:51:00 +0000 (0:00:00.461) 0:04:59.212 ******* 2026-02-16 06:04:00.689443 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:04:00.689454 | orchestrator | 2026-02-16 06:04:00.689465 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] *** 2026-02-16 06:04:00.689475 | orchestrator | Monday 16 February 2026 05:51:00 +0000 (0:00:00.120) 0:04:59.332 ******* 2026-02-16 06:04:00.689486 | orchestrator | 2026-02-16 06:04:00.689497 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' 
is running] *** 2026-02-16 06:04:00.689681 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (5 retries left). 2026-02-16 06:04:00.690108 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (4 retries left). 2026-02-16 06:22:22.826408 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (3 retries left). 2026-02-16 06:22:22.826635 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (2 retries left). 2026-02-16 06:22:22.826907 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (1 retries left). 2026-02-16 06:22:22.827081 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...'
is running] *** 2026-02-16 06:22:22.827175 | orchestrator | 2026-02-16 06:22:22.827212 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-16 06:22:22.827224 | orchestrator | 2026-02-16 06:22:22.827235 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-16 06:22:22.827245 | orchestrator | 2026-02-16 06:22:22.827256 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-16 06:22:22.827267 | orchestrator | [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin 2026-02-16 06:22:22.827279 | orchestrator | (): '57219600-9625-e58c-1753-000000000297' 2026-02-16 06:22:22.827306 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"attempts": 5, "changed": false, "cmd": ["docker", "exec", "ceph-mon-testbed-node-0", "ceph", "--cluster", "ceph", "-m", "192.168.16.8", "quorum_status", "--format", "json"], "delta": "0:05:00.336466", "end": "2026-02-16 06:22:19.413501", "msg": "non-zero return code", "rc": 1, "start": "2026-02-16 06:17:19.077035", "stderr": "2026-02-16T06:22:19.394+0000 77fa7a521640 0 monclient(hunting): authenticate timed out after 300\n[errno 110] RADOS timed out (error connecting to the cluster)", "stderr_lines": ["2026-02-16T06:22:19.394+0000 77fa7a521640 0 monclient(hunting): authenticate timed out after 300", "[errno 110] RADOS timed out (error connecting to the cluster)"], "stdout": "", "stdout_lines": []} 2026-02-16 06:22:22.827320 | orchestrator | 2026-02-16 06:22:22.827331 | orchestrator | TASK [Unmask the mon service] ************************************************** 2026-02-16 06:22:22.827343 | orchestrator | Monday 16 February 2026 06:22:19 +0000 (0:31:19.295) 0:36:18.627 ******* 2026-02-16 06:22:22.827353 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:22:22.827365 | orchestrator | 
2026-02-16 06:22:22.827376 | orchestrator | TASK [Unmask the mgr service] ************************************************** 2026-02-16 06:22:22.827387 | orchestrator | Monday 16 February 2026 06:22:20 +0000 (0:00:00.877) 0:36:19.505 ******* 2026-02-16 06:22:22.827397 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:22:22.827408 | orchestrator | 2026-02-16 06:22:22.827418 | orchestrator | TASK [Stop the playbook execution] ********************************************* 2026-02-16 06:22:22.827429 | orchestrator | Monday 16 February 2026 06:22:21 +0000 (0:00:01.093) 0:36:20.599 ******* 2026-02-16 06:22:22.827440 | orchestrator | [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin 2026-02-16 06:22:22.827450 | orchestrator | (): '57219600-9625-e58c-1753-0000000002a2' 2026-02-16 06:22:22.827481 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "There was an error during monitor upgrade. Please, check the previous task results."} 2026-02-16 06:22:22.827492 | orchestrator | 2026-02-16 06:22:22.827503 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 06:22:22.827514 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-16 06:22:22.827524 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0 2026-02-16 06:22:22.827542 | orchestrator | testbed-node-0 : ok=121  changed=10  unreachable=0 failed=1  skipped=164  rescued=1  ignored=0 2026-02-16 06:22:22.827554 | orchestrator | testbed-node-1 : ok=25  changed=2  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0 2026-02-16 06:22:22.827565 | orchestrator | testbed-node-2 : ok=25  changed=2  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0 2026-02-16 06:22:22.827575 | orchestrator | testbed-node-3 : ok=33  changed=2  unreachable=0 failed=0 skipped=74  rescued=0 ignored=0 2026-02-16 06:22:22.827586 | 
orchestrator | testbed-node-4 : ok=33  changed=2  unreachable=0 failed=0 skipped=71  rescued=0 ignored=0 2026-02-16 06:22:22.827597 | orchestrator | testbed-node-5 : ok=33  changed=2  unreachable=0 failed=0 skipped=71  rescued=0 ignored=0 2026-02-16 06:22:22.827607 | orchestrator | 2026-02-16 06:22:22.827618 | orchestrator | 2026-02-16 06:22:22.827629 | orchestrator | 2026-02-16 06:22:22.827640 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 06:22:22.827651 | orchestrator | Monday 16 February 2026 06:22:22 +0000 (0:00:01.190) 0:36:21.790 ******* 2026-02-16 06:22:22.827661 | orchestrator | =============================================================================== 2026-02-16 06:22:22.827672 | orchestrator | Container | waiting for the containerized monitor to join the quorum... 1879.30s 2026-02-16 06:22:22.827682 | orchestrator | Gather and delegate facts ---------------------------------------------- 29.76s 2026-02-16 06:22:22.827693 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.31s 2026-02-16 06:22:22.827711 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 12.79s 2026-02-16 06:22:23.346978 | orchestrator | 2026-02-16 06:22:23 | INFO  | Task a8ad6d5a-2d8c-4871-b8e0-f35758f348e5 (ceph-rolling_update) was prepared for execution. 2026-02-16 06:22:23.347134 | orchestrator | 2026-02-16 06:22:23 | INFO  | It takes a moment until task a8ad6d5a-2d8c-4871-b8e0-f35758f348e5 (ceph-rolling_update) has been started and output is visible here. 
2026-02-16 06:23:17.776916 | orchestrator | Set cluster configs ----------------------------------------------------- 9.96s 2026-02-16 06:23:17.777133 | orchestrator | ceph-mon : Set cluster configs ------------------------------------------ 9.63s 2026-02-16 06:23:17.777156 | orchestrator | ceph-infra : Update cache for Debian based OSs -------------------------- 7.44s 2026-02-16 06:23:17.777169 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.16s 2026-02-16 06:23:17.777180 | orchestrator | Gather facts ------------------------------------------------------------ 3.93s 2026-02-16 06:23:17.777192 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 2.91s 2026-02-16 06:23:17.777203 | orchestrator | Stop ceph mon ----------------------------------------------------------- 2.90s 2026-02-16 06:23:17.777213 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 2.40s 2026-02-16 06:23:17.777249 | orchestrator | ceph-mon : Start the monitor service ------------------------------------ 2.39s 2026-02-16 06:23:17.777261 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.31s 2026-02-16 06:23:17.777272 | orchestrator | ceph-infra : Add logrotate configuration -------------------------------- 2.29s 2026-02-16 06:23:17.777282 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.28s 2026-02-16 06:23:17.777293 | orchestrator | ceph-mon : Check if monitor initial keyring already exists -------------- 2.27s 2026-02-16 06:23:17.777303 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.26s 2026-02-16 06:23:17.777314 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 2.19s 2026-02-16 06:23:17.777325 | orchestrator | ceph-validate : Include check_system.yml -------------------------------- 2.08s 2026-02-16 
06:23:17.777336 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-16 06:23:17.777348 | orchestrator | 2.16.14 2026-02-16 06:23:17.777362 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-16 06:23:17.777374 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-16 06:23:17.777396 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-16 06:23:17.777406 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-16 06:23:17.777428 | orchestrator | 2026-02-16 06:23:17.777440 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-02-16 06:23:17.777453 | orchestrator | 2026-02-16 06:23:17.777465 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-02-16 06:23:17.777478 | orchestrator | Monday 16 February 2026 06:22:30 +0000 (0:00:01.092) 0:00:01.092 ******* 2026-02-16 06:23:17.777490 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-02-16 06:23:17.777517 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-02-16 06:23:17.777531 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-02-16 06:23:17.777543 | orchestrator | skipping: [localhost] 2026-02-16 06:23:17.777556 | orchestrator | 2026-02-16 06:23:17.777568 | orchestrator | PLAY [Gather facts and check the init system] ********************************** 2026-02-16 06:23:17.777581 | orchestrator | 2026-02-16 06:23:17.777594 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-02-16 06:23:17.777606 | orchestrator | Monday 16 February 2026 06:22:31 +0000 (0:00:00.899) 0:00:01.992 ******* 2026-02-16 06:23:17.777619 | orchestrator | ok: [testbed-node-0] => { 2026-02-16 06:23:17.777631 | 
orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-16 06:23:17.777643 | orchestrator | } 2026-02-16 06:23:17.777655 | orchestrator | ok: [testbed-node-1] => { 2026-02-16 06:23:17.777668 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-16 06:23:17.777679 | orchestrator | } 2026-02-16 06:23:17.777691 | orchestrator | ok: [testbed-node-2] => { 2026-02-16 06:23:17.777704 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-16 06:23:17.777716 | orchestrator | } 2026-02-16 06:23:17.777728 | orchestrator | ok: [testbed-node-3] => { 2026-02-16 06:23:17.777740 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-16 06:23:17.777752 | orchestrator | } 2026-02-16 06:23:17.777764 | orchestrator | ok: [testbed-node-4] => { 2026-02-16 06:23:17.777776 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-16 06:23:17.777789 | orchestrator | } 2026-02-16 06:23:17.777801 | orchestrator | ok: [testbed-node-5] => { 2026-02-16 06:23:17.777813 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-16 06:23:17.777833 | orchestrator | } 2026-02-16 06:23:17.777844 | orchestrator | ok: [testbed-manager] => { 2026-02-16 06:23:17.777855 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-16 06:23:17.777866 | orchestrator | } 2026-02-16 06:23:17.777877 | orchestrator | 2026-02-16 06:23:17.777887 | orchestrator | TASK [Gather facts] ************************************************************ 2026-02-16 06:23:17.777898 | orchestrator | Monday 16 February 2026 06:22:32 +0000 (0:00:01.903) 0:00:03.896 ******* 2026-02-16 06:23:17.777909 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:23:17.777920 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:23:17.777931 | orchestrator | skipping: [testbed-node-2] 2026-02-16 
06:23:17.777941 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:23:17.777952 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:23:17.777963 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:23:17.777973 | orchestrator | ok: [testbed-manager] 2026-02-16 06:23:17.777997 | orchestrator | 2026-02-16 06:23:17.778009 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-02-16 06:23:17.778151 | orchestrator | Monday 16 February 2026 06:22:36 +0000 (0:00:03.667) 0:00:07.563 ******* 2026-02-16 06:23:17.778165 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-16 06:23:17.778176 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-16 06:23:17.778187 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 06:23:17.778198 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-16 06:23:17.778209 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-16 06:23:17.778219 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-16 06:23:17.778230 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-16 06:23:17.778241 | orchestrator | 2026-02-16 06:23:17.778252 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-02-16 06:23:17.778263 | orchestrator | Monday 16 February 2026 06:23:07 +0000 (0:00:31.119) 0:00:38.683 ******* 2026-02-16 06:23:17.778273 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:23:17.778284 | orchestrator | ok: [testbed-node-1] 2026-02-16 06:23:17.778295 | orchestrator | ok: [testbed-node-2] 2026-02-16 06:23:17.778306 | orchestrator | ok: [testbed-node-3] 2026-02-16 06:23:17.778316 | orchestrator | ok: [testbed-node-4] 
2026-02-16 06:23:17.778327 | orchestrator | ok: [testbed-node-5] 2026-02-16 06:23:17.778337 | orchestrator | ok: [testbed-manager] 2026-02-16 06:23:17.778348 | orchestrator | 2026-02-16 06:23:17.778359 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-16 06:23:17.778370 | orchestrator | Monday 16 February 2026 06:23:08 +0000 (0:00:00.896) 0:00:39.579 ******* 2026-02-16 06:23:17.778382 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-16 06:23:17.778395 | orchestrator | 2026-02-16 06:23:17.778406 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-16 06:23:17.778417 | orchestrator | Monday 16 February 2026 06:23:10 +0000 (0:00:01.842) 0:00:41.422 ******* 2026-02-16 06:23:17.778427 | orchestrator | ok: [testbed-node-1] 2026-02-16 06:23:17.778438 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:23:17.778449 | orchestrator | ok: [testbed-node-2] 2026-02-16 06:23:17.778459 | orchestrator | ok: [testbed-node-3] 2026-02-16 06:23:17.778470 | orchestrator | ok: [testbed-node-4] 2026-02-16 06:23:17.778481 | orchestrator | ok: [testbed-node-5] 2026-02-16 06:23:17.778491 | orchestrator | ok: [testbed-manager] 2026-02-16 06:23:17.778502 | orchestrator | 2026-02-16 06:23:17.778513 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-16 06:23:17.778524 | orchestrator | Monday 16 February 2026 06:23:11 +0000 (0:00:01.421) 0:00:42.843 ******* 2026-02-16 06:23:17.778535 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:23:17.778554 | orchestrator | ok: [testbed-node-1] 2026-02-16 06:23:17.778572 | orchestrator | ok: [testbed-node-2] 2026-02-16 06:23:17.778593 | orchestrator | ok: [testbed-node-3] 2026-02-16 06:23:17.778613 | orchestrator | ok: [testbed-node-4] 2026-02-16 
06:23:17.778632 | orchestrator | ok: [testbed-node-5] 2026-02-16 06:23:17.778650 | orchestrator | ok: [testbed-manager] 2026-02-16 06:23:17.778669 | orchestrator | 2026-02-16 06:23:17.778688 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-16 06:23:17.778716 | orchestrator | Monday 16 February 2026 06:23:12 +0000 (0:00:00.732) 0:00:43.576 ******* 2026-02-16 06:23:17.778736 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:23:17.778756 | orchestrator | ok: [testbed-node-1] 2026-02-16 06:23:17.778776 | orchestrator | ok: [testbed-node-2] 2026-02-16 06:23:17.778797 | orchestrator | ok: [testbed-node-3] 2026-02-16 06:23:17.778816 | orchestrator | ok: [testbed-node-4] 2026-02-16 06:23:17.778834 | orchestrator | ok: [testbed-node-5] 2026-02-16 06:23:17.778854 | orchestrator | ok: [testbed-manager] 2026-02-16 06:23:17.778874 | orchestrator | 2026-02-16 06:23:17.778894 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-16 06:23:17.778913 | orchestrator | Monday 16 February 2026 06:23:13 +0000 (0:00:01.360) 0:00:44.937 ******* 2026-02-16 06:23:17.778931 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:23:17.778951 | orchestrator | ok: [testbed-node-1] 2026-02-16 06:23:17.778971 | orchestrator | ok: [testbed-node-2] 2026-02-16 06:23:17.778989 | orchestrator | ok: [testbed-node-3] 2026-02-16 06:23:17.779009 | orchestrator | ok: [testbed-node-4] 2026-02-16 06:23:17.779054 | orchestrator | ok: [testbed-node-5] 2026-02-16 06:23:17.779075 | orchestrator | ok: [testbed-manager] 2026-02-16 06:23:17.779089 | orchestrator | 2026-02-16 06:23:17.779107 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-16 06:23:17.779125 | orchestrator | Monday 16 February 2026 06:23:14 +0000 (0:00:00.814) 0:00:45.751 ******* 2026-02-16 06:23:17.779142 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:23:17.779160 | orchestrator | 
ok: [testbed-node-1] 2026-02-16 06:23:17.779179 | orchestrator | ok: [testbed-node-2] 2026-02-16 06:23:17.779198 | orchestrator | ok: [testbed-node-3] 2026-02-16 06:23:17.779217 | orchestrator | ok: [testbed-node-4] 2026-02-16 06:23:17.779236 | orchestrator | ok: [testbed-node-5] 2026-02-16 06:23:17.779254 | orchestrator | ok: [testbed-manager] 2026-02-16 06:23:17.779272 | orchestrator | 2026-02-16 06:23:17.779291 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-16 06:23:17.779309 | orchestrator | Monday 16 February 2026 06:23:15 +0000 (0:00:01.112) 0:00:46.863 ******* 2026-02-16 06:23:17.779328 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:23:17.779345 | orchestrator | ok: [testbed-node-1] 2026-02-16 06:23:17.779362 | orchestrator | ok: [testbed-node-2] 2026-02-16 06:23:17.779379 | orchestrator | ok: [testbed-node-3] 2026-02-16 06:23:17.779397 | orchestrator | ok: [testbed-node-4] 2026-02-16 06:23:17.779415 | orchestrator | ok: [testbed-node-5] 2026-02-16 06:23:17.779434 | orchestrator | ok: [testbed-manager] 2026-02-16 06:23:17.779451 | orchestrator | 2026-02-16 06:23:17.779469 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-16 06:23:17.779486 | orchestrator | Monday 16 February 2026 06:23:16 +0000 (0:00:00.761) 0:00:47.625 ******* 2026-02-16 06:23:17.779504 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:23:17.779521 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:23:17.779537 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:23:17.779554 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:23:17.779572 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:23:17.779589 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:23:17.779627 | orchestrator | skipping: [testbed-manager] 2026-02-16 06:23:30.564152 | orchestrator | 2026-02-16 06:23:30.564266 | orchestrator | TASK [ceph-facts : Set_fact 
ceph_release ceph_stable_release] ****************** 2026-02-16 06:23:30.564283 | orchestrator | Monday 16 February 2026 06:23:17 +0000 (0:00:01.077) 0:00:48.703 ******* 2026-02-16 06:23:30.564319 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:23:30.564331 | orchestrator | ok: [testbed-node-1] 2026-02-16 06:23:30.564340 | orchestrator | ok: [testbed-node-2] 2026-02-16 06:23:30.564350 | orchestrator | ok: [testbed-node-3] 2026-02-16 06:23:30.564360 | orchestrator | ok: [testbed-node-4] 2026-02-16 06:23:30.564369 | orchestrator | ok: [testbed-node-5] 2026-02-16 06:23:30.564379 | orchestrator | ok: [testbed-manager] 2026-02-16 06:23:30.564388 | orchestrator | 2026-02-16 06:23:30.564398 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-16 06:23:30.564408 | orchestrator | Monday 16 February 2026 06:23:18 +0000 (0:00:00.977) 0:00:49.680 ******* 2026-02-16 06:23:30.564418 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 06:23:30.564428 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-16 06:23:30.564438 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-16 06:23:30.564447 | orchestrator | 2026-02-16 06:23:30.564457 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-16 06:23:30.564467 | orchestrator | Monday 16 February 2026 06:23:19 +0000 (0:00:00.670) 0:00:50.350 ******* 2026-02-16 06:23:30.564476 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:23:30.564486 | orchestrator | ok: [testbed-node-1] 2026-02-16 06:23:30.564495 | orchestrator | ok: [testbed-node-2] 2026-02-16 06:23:30.564504 | orchestrator | ok: [testbed-node-3] 2026-02-16 06:23:30.564515 | orchestrator | ok: [testbed-node-4] 2026-02-16 06:23:30.564524 | orchestrator | ok: [testbed-node-5] 2026-02-16 06:23:30.564534 | orchestrator | ok: [testbed-manager] 
2026-02-16 06:23:30.564544 | orchestrator | 2026-02-16 06:23:30.564553 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-16 06:23:30.564563 | orchestrator | Monday 16 February 2026 06:23:20 +0000 (0:00:00.949) 0:00:51.299 ******* 2026-02-16 06:23:30.564572 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 06:23:30.564582 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-16 06:23:30.564592 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-16 06:23:30.564601 | orchestrator | 2026-02-16 06:23:30.564611 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-16 06:23:30.564620 | orchestrator | Monday 16 February 2026 06:23:22 +0000 (0:00:02.362) 0:00:53.662 ******* 2026-02-16 06:23:30.564630 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-16 06:23:30.564640 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-16 06:23:30.564651 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-16 06:23:30.564663 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:23:30.564674 | orchestrator | 2026-02-16 06:23:30.564684 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-16 06:23:30.564695 | orchestrator | Monday 16 February 2026 06:23:23 +0000 (0:00:00.440) 0:00:54.103 ******* 2026-02-16 06:23:30.564722 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-16 06:23:30.564737 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-16 06:23:30.564749 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-16 06:23:30.564760 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:23:30.564771 | orchestrator | 2026-02-16 06:23:30.564782 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-16 06:23:30.564799 | orchestrator | Monday 16 February 2026 06:23:24 +0000 (0:00:00.922) 0:00:55.025 ******* 2026-02-16 06:23:30.564810 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:30.564823 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:30.564850 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:30.564861 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:23:30.564871 | orchestrator | 2026-02-16 06:23:30.564881 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-16 06:23:30.564891 | orchestrator | Monday 16 February 2026 06:23:24 +0000 (0:00:00.180) 0:00:55.206 ******* 2026-02-16 06:23:30.564903 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '94fb026fda3b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-16 06:23:21.055310', 'end': '2026-02-16 06:23:21.102761', 'delta': '0:00:00.047451', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['94fb026fda3b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-16 06:23:30.564915 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '8a5d26661ef8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-16 06:23:21.951424', 'end': '2026-02-16 06:23:22.036735', 'delta': '0:00:00.085311', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8a5d26661ef8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': 
False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-16 06:23:30.564930 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '6720fcec1b21', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-16 06:23:22.536148', 'end': '2026-02-16 06:23:22.576260', 'delta': '0:00:00.040112', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6720fcec1b21'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-16 06:23:30.564947 | orchestrator | 2026-02-16 06:23:30.564957 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-16 06:23:30.564966 | orchestrator | Monday 16 February 2026 06:23:24 +0000 (0:00:00.607) 0:00:55.814 ******* 2026-02-16 06:23:30.564976 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:23:30.564985 | orchestrator | ok: [testbed-node-1] 2026-02-16 06:23:30.565011 | orchestrator | ok: [testbed-node-2] 2026-02-16 06:23:30.565041 | orchestrator | ok: [testbed-node-3] 2026-02-16 06:23:30.565051 | orchestrator | ok: [testbed-node-4] 2026-02-16 06:23:30.565061 | orchestrator | ok: [testbed-node-5] 2026-02-16 06:23:30.565070 | orchestrator | ok: [testbed-manager] 2026-02-16 06:23:30.565080 | orchestrator | 2026-02-16 06:23:30.565089 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-16 06:23:30.565099 | orchestrator | Monday 16 February 2026 06:23:25 +0000 (0:00:01.008) 0:00:56.822 ******* 2026-02-16 06:23:30.565108 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:23:30.565118 | orchestrator | 
2026-02-16 06:23:30.565127 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-16 06:23:30.565138 | orchestrator | Monday 16 February 2026 06:23:26 +0000 (0:00:00.268) 0:00:57.090 ******* 2026-02-16 06:23:30.565147 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:23:30.565157 | orchestrator | ok: [testbed-node-1] 2026-02-16 06:23:30.565166 | orchestrator | ok: [testbed-node-2] 2026-02-16 06:23:30.565176 | orchestrator | ok: [testbed-node-3] 2026-02-16 06:23:30.565185 | orchestrator | ok: [testbed-node-4] 2026-02-16 06:23:30.565194 | orchestrator | ok: [testbed-node-5] 2026-02-16 06:23:30.565204 | orchestrator | ok: [testbed-manager] 2026-02-16 06:23:30.565213 | orchestrator | 2026-02-16 06:23:30.565223 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-16 06:23:30.565232 | orchestrator | Monday 16 February 2026 06:23:27 +0000 (0:00:00.992) 0:00:58.083 ******* 2026-02-16 06:23:30.565242 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:23:30.565251 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-16 06:23:30.565260 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-16 06:23:30.565270 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-16 06:23:30.565286 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-16 06:23:38.987148 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-16 06:23:38.987244 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-16 06:23:38.987258 | orchestrator | 2026-02-16 06:23:38.987270 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-16 06:23:38.987281 | orchestrator | Monday 16 February 2026 06:23:30 +0000 (0:00:03.406) 0:01:01.490 ******* 2026-02-16 06:23:38.987297 | orchestrator | ok: [testbed-node-0] 2026-02-16 
06:23:38.987315 | orchestrator | ok: [testbed-node-1] 2026-02-16 06:23:38.987332 | orchestrator | ok: [testbed-node-2] 2026-02-16 06:23:38.987348 | orchestrator | ok: [testbed-node-3] 2026-02-16 06:23:38.987365 | orchestrator | ok: [testbed-node-4] 2026-02-16 06:23:38.987382 | orchestrator | ok: [testbed-node-5] 2026-02-16 06:23:38.987399 | orchestrator | ok: [testbed-manager] 2026-02-16 06:23:38.987416 | orchestrator | 2026-02-16 06:23:38.987432 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-16 06:23:38.987448 | orchestrator | Monday 16 February 2026 06:23:31 +0000 (0:00:01.029) 0:01:02.520 ******* 2026-02-16 06:23:38.987464 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:23:38.987481 | orchestrator | 2026-02-16 06:23:38.987497 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-16 06:23:38.987514 | orchestrator | Monday 16 February 2026 06:23:31 +0000 (0:00:00.127) 0:01:02.647 ******* 2026-02-16 06:23:38.987529 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:23:38.987538 | orchestrator | 2026-02-16 06:23:38.987547 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-16 06:23:38.987556 | orchestrator | Monday 16 February 2026 06:23:31 +0000 (0:00:00.227) 0:01:02.875 ******* 2026-02-16 06:23:38.987591 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:23:38.987601 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:23:38.987609 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:23:38.987618 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:23:38.987626 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:23:38.987638 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:23:38.987653 | orchestrator | skipping: [testbed-manager] 2026-02-16 06:23:38.987667 | orchestrator | 2026-02-16 06:23:38.987681 | orchestrator | TASK [ceph-facts : Resolve device 
link(s)] ************************************* 2026-02-16 06:23:38.987696 | orchestrator | Monday 16 February 2026 06:23:33 +0000 (0:00:01.346) 0:01:04.221 ******* 2026-02-16 06:23:38.987711 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:23:38.987724 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:23:38.987739 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:23:38.987754 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:23:38.987769 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:23:38.987785 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:23:38.987801 | orchestrator | skipping: [testbed-manager] 2026-02-16 06:23:38.987815 | orchestrator | 2026-02-16 06:23:38.987829 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-16 06:23:38.987839 | orchestrator | Monday 16 February 2026 06:23:34 +0000 (0:00:00.801) 0:01:05.022 ******* 2026-02-16 06:23:38.987849 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:23:38.987858 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:23:38.987868 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:23:38.987878 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:23:38.987888 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:23:38.987898 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:23:38.987907 | orchestrator | skipping: [testbed-manager] 2026-02-16 06:23:38.987915 | orchestrator | 2026-02-16 06:23:38.987924 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-16 06:23:38.987933 | orchestrator | Monday 16 February 2026 06:23:35 +0000 (0:00:01.041) 0:01:06.064 ******* 2026-02-16 06:23:38.987941 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:23:38.987950 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:23:38.987958 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:23:38.987967 | orchestrator | skipping: 
[testbed-node-3] 2026-02-16 06:23:38.987975 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:23:38.987984 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:23:38.987992 | orchestrator | skipping: [testbed-manager] 2026-02-16 06:23:38.988001 | orchestrator | 2026-02-16 06:23:38.988009 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-16 06:23:38.988018 | orchestrator | Monday 16 February 2026 06:23:35 +0000 (0:00:00.839) 0:01:06.904 ******* 2026-02-16 06:23:38.988051 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:23:38.988061 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:23:38.988069 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:23:38.988078 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:23:38.988086 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:23:38.988095 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:23:38.988104 | orchestrator | skipping: [testbed-manager] 2026-02-16 06:23:38.988112 | orchestrator | 2026-02-16 06:23:38.988121 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-16 06:23:38.988130 | orchestrator | Monday 16 February 2026 06:23:36 +0000 (0:00:01.033) 0:01:07.937 ******* 2026-02-16 06:23:38.988138 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:23:38.988147 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:23:38.988155 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:23:38.988164 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:23:38.988172 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:23:38.988181 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:23:38.988198 | orchestrator | skipping: [testbed-manager] 2026-02-16 06:23:38.988206 | orchestrator | 2026-02-16 06:23:38.988215 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-16 06:23:38.988225 | 
orchestrator | Monday 16 February 2026 06:23:37 +0000 (0:00:00.719) 0:01:08.657 ******* 2026-02-16 06:23:38.988233 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:23:38.988242 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:23:38.988250 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:23:38.988258 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:23:38.988267 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:23:38.988275 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:23:38.988284 | orchestrator | skipping: [testbed-manager] 2026-02-16 06:23:38.988292 | orchestrator | 2026-02-16 06:23:38.988301 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-16 06:23:38.988328 | orchestrator | Monday 16 February 2026 06:23:38 +0000 (0:00:01.016) 0:01:09.673 ******* 2026-02-16 06:23:38.988339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:38.988351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:38.988360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:38.988411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-16 06:23:38.988428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:38.988437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:38.988446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:38.988472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2335e156', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1'], 'uuids': 
['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-16 06:23:39.150459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.150565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.150592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.150599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.150625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.150633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-16 06:23:39.150641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.150647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.150653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.150683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd4296cc6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-16 06:23:39.150696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.150702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.150708 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:23:39.150716 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.150723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.150734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.298346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-28-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-16 06:23:39.298453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.298468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.298496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.298509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c7144733', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 
'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-16 06:23:39.298520 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:23:39.298548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.298557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.298570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.298585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--50d7a967--e09e--512a--aa83--aa9bbdf9ab74-osd--block--50d7a967--e09e--512a--aa83--aa9bbdf9ab74', 'dm-uuid-LVM-2dhVtclKCjfsjMcDe2D03F1qrxXtffQzYuMeigkCrxOY0hLAH1gOwaoo3bAqwsvb'], 'uuids': ['b3748582-e358-45b0-b8aa-f881226dc8da'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '51f5f49d', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['YuMeig-kCrx-OY0h-LAH1-gOwa-oo3b-Aqwsvb']}})  2026-02-16 06:23:39.298595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2', 'scsi-SQEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '843bc551', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-16 06:23:39.298605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1ITxS0-SFz0-FdlF-VzSF-Uv8m-y10A-m0caaJ', 'scsi-0QEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51', 'scsi-SQEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0693774e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2f9a42f5--b575--5e11--9555--a5550e2fae1e-osd--block--2f9a42f5--b575--5e11--9555--a5550e2fae1e']}})  2026-02-16 06:23:39.298615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.298630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.423416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-22-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-16 06:23:39.423549 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:23:39.423594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.423612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-c3eDMW-CQjE-FL46-zd8q-XZ7h-Wvk7-L0nQAD', 'dm-uuid-CRYPT-LUKS2-011f269142c14738a165566bf449f017-c3eDMW-CQjE-FL46-zd8q-XZ7h-Wvk7-L0nQAD'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-16 06:23:39.423624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.423637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2f9a42f5--b575--5e11--9555--a5550e2fae1e-osd--block--2f9a42f5--b575--5e11--9555--a5550e2fae1e', 'dm-uuid-LVM-F4bqzAKmgcv4nzZjVJIDDLRdBkjdiY7Ac3eDMWCQjEFL46zd8qXZ7hWvk7L0nQAD'], 'uuids': ['011f2691-42c1-4738-a165-566bf449f017'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0693774e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': 
['c3eDMW-CQjE-FL46-zd8q-XZ7h-Wvk7-L0nQAD']}})  2026-02-16 06:23:39.423652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UNvti2-beMu-mtun-nkoB-anD7-j3vD-BO56Wb', 'scsi-0QEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e', 'scsi-SQEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '51f5f49d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--50d7a967--e09e--512a--aa83--aa9bbdf9ab74-osd--block--50d7a967--e09e--512a--aa83--aa9bbdf9ab74']}})  2026-02-16 06:23:39.423665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.423698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.423721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2168da4d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part16', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part14', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part15', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part1', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-16 06:23:39.423746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3ec6a818--dc71--5cb4--ac47--83f209d09bca-osd--block--3ec6a818--dc71--5cb4--ac47--83f209d09bca', 'dm-uuid-LVM-IKNT1aRSRRXmVnhjGHBWtObOyhGZoCrKxknn5549qE5Iv1X6exAA2Hq2RDcxdb2r'], 'uuids': ['5964190e-3947-423a-9774-0a2e895129b4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0857a7ec', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xknn55-49qE-5Iv1-X6ex-AA2H-q2RD-cxdb2r']}})  2026-02-16 06:23:39.423758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.423778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705', 'scsi-SQEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57ea9400', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-16 06:23:39.626942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.627114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-W4T77R-WX0u-2wiK-0VwS-pHXw-eigq-78SyVp', 'scsi-0QEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829', 'scsi-SQEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '769208b9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d-osd--block--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d']}})  2026-02-16 06:23:39.627134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-YuMeig-kCrx-OY0h-LAH1-gOwa-oo3b-Aqwsvb', 'dm-uuid-CRYPT-LUKS2-b3748582e35845b0b8aaf881226dc8da-YuMeig-kCrx-OY0h-LAH1-gOwa-oo3b-Aqwsvb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-16 06:23:39.627145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.627155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.627164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': 
'1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-16 06:23:39.627173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.627196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-qYYWm2-N1bk-S9UT-0Dip-02Aj-Kcu4-0awaVv', 'dm-uuid-CRYPT-LUKS2-7b6d91351d3c4adabcb6913cd16f15c7-qYYWm2-N1bk-S9UT-0Dip-02Aj-Kcu4-0awaVv'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-16 06:23:39.627212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.627226 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': 
['dm-name-ceph--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d-osd--block--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d', 'dm-uuid-LVM-sWHkNGoua6AD2gtW0aHfBT1ggS3B4VVdqYYWm2N1bkS9UT0Dip02AjKcu40awaVv'], 'uuids': ['7b6d9135-1d3c-4ada-bcb6-913cd16f15c7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '769208b9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['qYYWm2-N1bk-S9UT-0Dip-02Aj-Kcu4-0awaVv']}})  2026-02-16 06:23:39.627237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ezeU5X-kiVi-Bwdm-EJU8-vTMX-Ty8v-7odRXz', 'scsi-0QEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e', 'scsi-SQEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0857a7ec', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--3ec6a818--dc71--5cb4--ac47--83f209d09bca-osd--block--3ec6a818--dc71--5cb4--ac47--83f209d09bca']}})  2026-02-16 06:23:39.627246 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:23:39.627257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.627277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '66717551', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part16', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part14', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part15', 
'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part1', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-16 06:23:39.689624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.689716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.689731 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xknn55-49qE-5Iv1-X6ex-AA2H-q2RD-cxdb2r', 
'dm-uuid-CRYPT-LUKS2-5964190e3947423a97740a2e895129b4-xknn55-49qE-5Iv1-X6ex-AA2H-q2RD-cxdb2r'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-16 06:23:39.689745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.689756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f418f421--cc32--53ce--b421--39353fe37c02-osd--block--f418f421--cc32--53ce--b421--39353fe37c02', 'dm-uuid-LVM-fuzYkTDOD1mzGPTtEVy3HIfkbUT8vrouEUngu6j9gDpOiJ09icmXLIesmhVGIdAG'], 'uuids': ['ec5126ba-6809-43bf-b597-f55a08b20d1f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '560fea90', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['EUngu6-j9gD-pOiJ-09ic-mXLI-esmh-VGIdAG']}})  2026-02-16 06:23:39.689768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d', 'scsi-SQEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': 
None, 'sas_device_handle': None, 'serial': '22f5929b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-16 06:23:39.689799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-z25UVR-mt7s-2TOu-f4Na-2m38-OcPQ-rSbkPq', 'scsi-0QEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5', 'scsi-SQEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '864a7dfe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--10a0662d--59e9--5a43--af5c--1b6d671b7fa5-osd--block--10a0662d--59e9--5a43--af5c--1b6d671b7fa5']}})  2026-02-16 06:23:39.689833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.689844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.689855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-30-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-16 06:23:39.689865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.689876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-dSLmrZ-CRVK-IRBO-jrNi-ck0K-roaJ-NYuYcA', 'dm-uuid-CRYPT-LUKS2-ad5c8a1c7cef458c9644d8140426285b-dSLmrZ-CRVK-IRBO-jrNi-ck0K-roaJ-NYuYcA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-16 06:23:39.689886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.689896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--10a0662d--59e9--5a43--af5c--1b6d671b7fa5-osd--block--10a0662d--59e9--5a43--af5c--1b6d671b7fa5', 'dm-uuid-LVM-SWv31bXFKxTO3vyaMihj1WLbgzWvzkgjdSLmrZCRVKIRBOjrNick0KroaJNYuYcA'], 'uuids': ['ad5c8a1c-7cef-458c-9644-d8140426285b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '864a7dfe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['dSLmrZ-CRVK-IRBO-jrNi-ck0K-roaJ-NYuYcA']}})  2026-02-16 06:23:39.689926 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Qrttlw-98AS-fQrI-yUr1-wyrI-2oj6-dafTom', 'scsi-0QEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569', 'scsi-SQEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '560fea90', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f418f421--cc32--53ce--b421--39353fe37c02-osd--block--f418f421--cc32--53ce--b421--39353fe37c02']}})  2026-02-16 06:23:39.827579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.827682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f566252a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part16', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part14', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part15', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part1', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-16 06:23:39.827730 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:23:39.827744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.827757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.827783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-EUngu6-j9gD-pOiJ-09ic-mXLI-esmh-VGIdAG', 'dm-uuid-CRYPT-LUKS2-ec5126ba680943bfb597f55a08b20d1f-EUngu6-j9gD-pOiJ-09ic-mXLI-esmh-VGIdAG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-16 06:23:39.827796 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:23:39.827826 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.827838 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.827850 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.827861 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-53-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-16 06:23:39.827873 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 
'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.827916 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.827928 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:39.827956 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f62a15e9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part16', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part14', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part15', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part1', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-16 06:23:40.444517 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:40.444607 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:23:40.444619 | orchestrator | skipping: [testbed-manager] 2026-02-16 06:23:40.444651 | orchestrator | 2026-02-16 06:23:40.444661 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-16 06:23:40.444670 | orchestrator | Monday 16 February 2026 06:23:39 +0000 (0:00:01.186) 0:01:10.860 ******* 2026-02-16 06:23:40.444681 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.444691 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.444712 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.444722 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.444746 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.444755 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.444770 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.444785 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2335e156', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.444802 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.612662 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.612789 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:23:40.612807 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.612820 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.612832 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.612859 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.612871 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.612902 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.612923 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.612944 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd4296cc6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part15', 
'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_d4296cc6-718f-4cad-a4ad-740e974bf2cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.612959 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.612979 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.895988 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:23:40.896193 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.896219 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.896236 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.896259 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-28-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.896268 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.896276 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.896316 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.896332 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c7144733', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7144733-ae74-44fe-b24d-98a6f80ad4d8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.896342 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.896354 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:40.896361 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:23:40.896374 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.069760 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--50d7a967--e09e--512a--aa83--aa9bbdf9ab74-osd--block--50d7a967--e09e--512a--aa83--aa9bbdf9ab74', 'dm-uuid-LVM-2dhVtclKCjfsjMcDe2D03F1qrxXtffQzYuMeigkCrxOY0hLAH1gOwaoo3bAqwsvb'], 'uuids': ['b3748582-e358-45b0-b8aa-f881226dc8da'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '51f5f49d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['YuMeig-kCrx-OY0h-LAH1-gOwa-oo3b-Aqwsvb']}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.069847 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2', 'scsi-SQEMU_QEMU_HARDDISK_843bc551-f5ad-4319-82ad-d411f9295fd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '843bc551', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.069855 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1ITxS0-SFz0-FdlF-VzSF-Uv8m-y10A-m0caaJ', 'scsi-0QEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51', 'scsi-SQEMU_QEMU_HARDDISK_0693774e-e893-4e7b-949f-071f2326db51'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0693774e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2f9a42f5--b575--5e11--9555--a5550e2fae1e-osd--block--2f9a42f5--b575--5e11--9555--a5550e2fae1e']}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.069876 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.069881 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.069895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-22-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.069912 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.069919 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-c3eDMW-CQjE-FL46-zd8q-XZ7h-Wvk7-L0nQAD', 'dm-uuid-CRYPT-LUKS2-011f269142c14738a165566bf449f017-c3eDMW-CQjE-FL46-zd8q-XZ7h-Wvk7-L0nQAD'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.069930 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.069940 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2f9a42f5--b575--5e11--9555--a5550e2fae1e-osd--block--2f9a42f5--b575--5e11--9555--a5550e2fae1e', 'dm-uuid-LVM-F4bqzAKmgcv4nzZjVJIDDLRdBkjdiY7Ac3eDMWCQjEFL46zd8qXZ7hWvk7L0nQAD'], 'uuids': ['011f2691-42c1-4738-a165-566bf449f017'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0693774e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['c3eDMW-CQjE-FL46-zd8q-XZ7h-Wvk7-L0nQAD']}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.069949 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UNvti2-beMu-mtun-nkoB-anD7-j3vD-BO56Wb', 'scsi-0QEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e', 'scsi-SQEMU_QEMU_HARDDISK_51f5f49d-415a-48de-982e-531dff143e5e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '51f5f49d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--50d7a967--e09e--512a--aa83--aa9bbdf9ab74-osd--block--50d7a967--e09e--512a--aa83--aa9bbdf9ab74']}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.219691 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.219805 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.219818 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3ec6a818--dc71--5cb4--ac47--83f209d09bca-osd--block--3ec6a818--dc71--5cb4--ac47--83f209d09bca', 'dm-uuid-LVM-IKNT1aRSRRXmVnhjGHBWtObOyhGZoCrKxknn5549qE5Iv1X6exAA2Hq2RDcxdb2r'], 'uuids': ['5964190e-3947-423a-9774-0a2e895129b4'], 'labels': [], 
'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0857a7ec', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xknn55-49qE-5Iv1-X6ex-AA2H-q2RD-cxdb2r']}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.219843 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705', 'scsi-SQEMU_QEMU_HARDDISK_57ea9400-2602-4802-b9b7-802a488f4705'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57ea9400', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.219877 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2168da4d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part16', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part14', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part15', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part1', 'scsi-SQEMU_QEMU_HARDDISK_2168da4d-1d17-4015-90e6-e36c44513ae5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.219887 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.219900 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.219909 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': 
['dm-name-YuMeig-kCrx-OY0h-LAH1-gOwa-oo3b-Aqwsvb', 'dm-uuid-CRYPT-LUKS2-b3748582e35845b0b8aaf881226dc8da-YuMeig-kCrx-OY0h-LAH1-gOwa-oo3b-Aqwsvb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.219923 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-W4T77R-WX0u-2wiK-0VwS-pHXw-eigq-78SyVp', 'scsi-0QEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829', 'scsi-SQEMU_QEMU_HARDDISK_769208b9-3be0-45fa-bf10-39ffe30cf829'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '769208b9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d-osd--block--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d']}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.271381 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.271497 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.271512 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.271545 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.271557 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-qYYWm2-N1bk-S9UT-0Dip-02Aj-Kcu4-0awaVv', 'dm-uuid-CRYPT-LUKS2-7b6d91351d3c4adabcb6913cd16f15c7-qYYWm2-N1bk-S9UT-0Dip-02Aj-Kcu4-0awaVv'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.271569 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.271599 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d-osd--block--ee7b6e88--c83f--5dc8--a180--0e3d5e3fc99d', 'dm-uuid-LVM-sWHkNGoua6AD2gtW0aHfBT1ggS3B4VVdqYYWm2N1bkS9UT0Dip02AjKcu40awaVv'], 'uuids': ['7b6d9135-1d3c-4ada-bcb6-913cd16f15c7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '769208b9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['qYYWm2-N1bk-S9UT-0Dip-02Aj-Kcu4-0awaVv']}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.271623 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.271642 | orchestrator | skipping: [testbed-node-4] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ezeU5X-kiVi-Bwdm-EJU8-vTMX-Ty8v-7odRXz', 'scsi-0QEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e', 'scsi-SQEMU_QEMU_HARDDISK_0857a7ec-98e0-4b3a-95cc-40567a4f4a8e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0857a7ec', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--3ec6a818--dc71--5cb4--ac47--83f209d09bca-osd--block--3ec6a818--dc71--5cb4--ac47--83f209d09bca']}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.271657 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f418f421--cc32--53ce--b421--39353fe37c02-osd--block--f418f421--cc32--53ce--b421--39353fe37c02', 'dm-uuid-LVM-fuzYkTDOD1mzGPTtEVy3HIfkbUT8vrouEUngu6j9gDpOiJ09icmXLIesmhVGIdAG'], 'uuids': ['ec5126ba-6809-43bf-b597-f55a08b20d1f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '560fea90', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['EUngu6-j9gD-pOiJ-09ic-mXLI-esmh-VGIdAG']}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.271669 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.271688 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d', 'scsi-SQEMU_QEMU_HARDDISK_22f5929b-f2f1-4a02-b80c-ace7dc1afd6d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '22f5929b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.306739 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '66717551', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part16', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part14', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part15', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part1', 'scsi-SQEMU_QEMU_HARDDISK_66717551-de8b-4214-b5a8-5208e0aa8d29-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.306871 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-z25UVR-mt7s-2TOu-f4Na-2m38-OcPQ-rSbkPq', 'scsi-0QEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5', 'scsi-SQEMU_QEMU_HARDDISK_864a7dfe-a330-4dca-9b66-49ca9e8841e5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '864a7dfe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--10a0662d--59e9--5a43--af5c--1b6d671b7fa5-osd--block--10a0662d--59e9--5a43--af5c--1b6d671b7fa5']}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.306891 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.306934 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.306944 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.306962 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.306972 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xknn55-49qE-5Iv1-X6ex-AA2H-q2RD-cxdb2r', 'dm-uuid-CRYPT-LUKS2-5964190e3947423a97740a2e895129b4-xknn55-49qE-5Iv1-X6ex-AA2H-q2RD-cxdb2r'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.306982 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-30-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.306992 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.307018 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-dSLmrZ-CRVK-IRBO-jrNi-ck0K-roaJ-NYuYcA', 'dm-uuid-CRYPT-LUKS2-ad5c8a1c7cef458c9644d8140426285b-dSLmrZ-CRVK-IRBO-jrNi-ck0K-roaJ-NYuYcA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.379585 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.379652 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:23:41.379661 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--10a0662d--59e9--5a43--af5c--1b6d671b7fa5-osd--block--10a0662d--59e9--5a43--af5c--1b6d671b7fa5', 'dm-uuid-LVM-SWv31bXFKxTO3vyaMihj1WLbgzWvzkgjdSLmrZCRVKIRBOjrNick0KroaJNYuYcA'], 'uuids': ['ad5c8a1c-7cef-458c-9644-d8140426285b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '864a7dfe', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['dSLmrZ-CRVK-IRBO-jrNi-ck0K-roaJ-NYuYcA']}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.379666 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:23:41.379671 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Qrttlw-98AS-fQrI-yUr1-wyrI-2oj6-dafTom', 'scsi-0QEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569', 'scsi-SQEMU_QEMU_HARDDISK_560fea90-96fc-4e98-a264-4fc86723b569'], 'uuids': [], 'labels': [], 
'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '560fea90', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f418f421--cc32--53ce--b421--39353fe37c02-osd--block--f418f421--cc32--53ce--b421--39353fe37c02']}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.379678 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.379709 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f566252a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part16', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part16'], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part14', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part15', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part1', 'scsi-SQEMU_QEMU_HARDDISK_f566252a-854e-4a1e-9644-f4618e7e3b5d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.379739 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.379747 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.379754 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 
'ansible_loop_var': 'item'})  2026-02-16 06:23:41.379761 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:41.379782 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:23:45.223471 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-53-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 
KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 06:23:45.223593 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 06:23:45.223622 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-EUngu6-j9gD-pOiJ-09ic-mXLI-esmh-VGIdAG', 'dm-uuid-CRYPT-LUKS2-ec5126ba680943bfb597f55a08b20d1f-EUngu6-j9gD-pOiJ-09ic-mXLI-esmh-VGIdAG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 06:23:45.223640 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 06:23:45.223653 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:23:45.223667 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 06:23:45.223746 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f62a15e9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part16', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part14', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part15', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part1', 'scsi-SQEMU_QEMU_HARDDISK_f62a15e9-7171-41ff-abc5-7047e911ab8f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 06:23:45.223763 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 06:23:45.223776 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-16 06:23:45.223787 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:23:45.223807 | orchestrator |
2026-02-16 06:23:45.223820 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-16 06:23:45.223832 | orchestrator | Monday 16 February 2026 06:23:41 +0000 (0:00:01.617) 0:01:12.478 *******
2026-02-16 06:23:45.223843 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:23:45.223854 | orchestrator | ok: [testbed-node-1]
2026-02-16 06:23:45.223895 | orchestrator | ok: [testbed-node-2]
2026-02-16 06:23:45.223906 | orchestrator | ok: [testbed-node-3]
2026-02-16 06:23:45.223916 | orchestrator | ok: [testbed-node-4]
2026-02-16 06:23:45.223927 | orchestrator | ok: [testbed-node-5]
2026-02-16 06:23:45.223938 | orchestrator | ok: [testbed-manager]
2026-02-16 06:23:45.223948 | orchestrator |
2026-02-16 06:23:45.223959 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-16 06:23:45.223970 | orchestrator | Monday 16 February 2026 06:23:42 +0000 (0:00:01.410) 0:01:13.889 *******
2026-02-16 06:23:45.223981 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:23:45.223994 | orchestrator | ok: [testbed-node-1]
2026-02-16 06:23:45.224006 | orchestrator | ok: [testbed-node-2]
2026-02-16 06:23:45.224017 | orchestrator | ok: [testbed-node-3]
2026-02-16 06:23:45.224063 | orchestrator | ok: [testbed-node-4]
2026-02-16 06:23:45.224077 | orchestrator | ok: [testbed-node-5]
2026-02-16 06:23:45.224094 | orchestrator | ok: [testbed-manager]
2026-02-16 06:23:45.224107 | orchestrator |
2026-02-16 06:23:45.224119 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-16 06:23:45.224131 | orchestrator | Monday 16 February 2026 06:23:43 +0000 (0:00:00.758) 0:01:14.647 *******
2026-02-16 06:23:45.224143 | orchestrator | ok: [testbed-node-1]
2026-02-16 06:23:45.224155 | orchestrator | ok: [testbed-node-2]
2026-02-16 06:23:45.224167 | orchestrator | ok: [testbed-node-3]
2026-02-16 06:23:45.224184 | orchestrator | ok: [testbed-node-4]
2026-02-16 06:23:45.224203 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:23:45.224221 | orchestrator | ok: [testbed-node-5]
2026-02-16 06:23:45.224250 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:23:58.341537 | orchestrator |
2026-02-16 06:23:58.341646 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-16 06:23:58.341661 | orchestrator | Monday 16 February 2026 06:23:45 +0000 (0:00:01.502) 0:01:16.149 *******
2026-02-16 06:23:58.341671 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:23:58.341683 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:23:58.341693 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:23:58.341703 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:23:58.341712 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:23:58.341722 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:23:58.341732 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:23:58.341741 | orchestrator |
2026-02-16 06:23:58.341751 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-16 06:23:58.341762 | orchestrator | Monday 16 February 2026 06:23:45 +0000 (0:00:00.776) 0:01:16.925 *******
2026-02-16 06:23:58.341771 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:23:58.341781 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:23:58.341790 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:23:58.341800 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:23:58.341809 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:23:58.341819 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:23:58.341829 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-02-16 06:23:58.341839 | orchestrator |
2026-02-16 06:23:58.341848 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-16 06:23:58.341858 | orchestrator | Monday 16 February 2026 06:23:47 +0000 (0:00:01.653) 0:01:18.579 *******
2026-02-16 06:23:58.341868 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:23:58.341877 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:23:58.341887 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:23:58.341897 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:23:58.341906 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:23:58.341942 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:23:58.341953 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:23:58.341963 | orchestrator |
2026-02-16 06:23:58.341973 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-16 06:23:58.341983 | orchestrator | Monday 16 February 2026 06:23:48 +0000 (0:00:00.797) 0:01:19.376 *******
2026-02-16 06:23:58.341993 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-16 06:23:58.342003 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-16 06:23:58.342012 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-16 06:23:58.342156 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-16 06:23:58.342169 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-16 06:23:58.342180 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-16 06:23:58.342192 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-16 06:23:58.342202 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-16 06:23:58.342213 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-16 06:23:58.342224 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-16 06:23:58.342235 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-16 06:23:58.342246 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-16 06:23:58.342256 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-16 06:23:58.342268 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-16 06:23:58.342281 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-16 06:23:58.342297 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-16 06:23:58.342313 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-02-16 06:23:58.342328 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-16 06:23:58.342344 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-16 06:23:58.342362 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-02-16 06:23:58.342378 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-02-16 06:23:58.342395 | orchestrator |
2026-02-16 06:23:58.342409 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-16 06:23:58.342421 | orchestrator | Monday 16 February 2026 06:23:50 +0000 (0:00:01.911) 0:01:21.287 *******
2026-02-16 06:23:58.342431 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-16 06:23:58.342440 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-16 06:23:58.342449 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-16 06:23:58.342459 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:23:58.342468 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-16 06:23:58.342493 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-16 06:23:58.342513 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-16 06:23:58.342523 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:23:58.342533 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-16 06:23:58.342542 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-16 06:23:58.342551 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-16 06:23:58.342561 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:23:58.342570 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-16 06:23:58.342579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-16 06:23:58.342604 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-16 06:23:58.342614 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:23:58.342624 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-16 06:23:58.342633 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-16 06:23:58.342643 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-16 06:23:58.342663 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:23:58.342673 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-16 06:23:58.342701 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-16 06:23:58.342712 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-16 06:23:58.342721 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:23:58.342731 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-16 06:23:58.342741 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-16 06:23:58.342750 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-16 06:23:58.342760 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:23:58.342769 | orchestrator |
2026-02-16 06:23:58.342778 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-16 06:23:58.342788 | orchestrator | Monday 16 February 2026 06:23:51 +0000 (0:00:01.304) 0:01:22.592 *******
2026-02-16 06:23:58.342797 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:23:58.342807 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:23:58.342816 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:23:58.342825 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:23:58.342836 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 06:23:58.342845 | orchestrator |
2026-02-16 06:23:58.342856 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-16 06:23:58.342868 | orchestrator | Monday 16 February 2026 06:23:52 +0000 (0:00:01.038) 0:01:23.631 *******
2026-02-16 06:23:58.342878 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:23:58.342889 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:23:58.342900 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:23:58.342910 | orchestrator |
2026-02-16 06:23:58.342921 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-16 06:23:58.342931 | orchestrator | Monday 16 February 2026 06:23:53 +0000 (0:00:00.597) 0:01:24.228 *******
2026-02-16 06:23:58.342942 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:23:58.342953 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:23:58.342963 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:23:58.342974 | orchestrator |
2026-02-16 06:23:58.342984 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-16 06:23:58.342995 | orchestrator | Monday 16 February 2026 06:23:53 +0000 (0:00:00.343) 0:01:24.572 *******
2026-02-16 06:23:58.343006 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:23:58.343016 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:23:58.343058 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:23:58.343073 | orchestrator |
2026-02-16 06:23:58.343084 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-16 06:23:58.343095 | orchestrator | Monday 16 February 2026 06:23:53 +0000 (0:00:00.355) 0:01:24.927 *******
2026-02-16 06:23:58.343106 | orchestrator | ok: [testbed-node-3]
2026-02-16 06:23:58.343117 | orchestrator | ok: [testbed-node-4]
2026-02-16 06:23:58.343127 | orchestrator | ok: [testbed-node-5]
2026-02-16 06:23:58.343138 | orchestrator |
2026-02-16 06:23:58.343149 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-16 06:23:58.343160 | orchestrator | Monday 16 February 2026 06:23:54 +0000 (0:00:00.415) 0:01:25.343 *******
2026-02-16 06:23:58.343170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-16 06:23:58.343181 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-16 06:23:58.343192 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-16 06:23:58.343202 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:23:58.343213 | orchestrator |
2026-02-16 06:23:58.343224 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-16 06:23:58.343234 | orchestrator | Monday 16 February 2026 06:23:54 +0000 (0:00:00.412) 0:01:25.756 *******
2026-02-16 06:23:58.343254 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-16 06:23:58.343265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-16 06:23:58.343276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-16 06:23:58.343286 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:23:58.343297 | orchestrator |
2026-02-16 06:23:58.343308 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-16 06:23:58.343318 | orchestrator | Monday 16 February 2026 06:23:55 +0000 (0:00:00.813) 0:01:26.569 *******
2026-02-16 06:23:58.343329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-16 06:23:58.343340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-16 06:23:58.343350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-16 06:23:58.343361 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:23:58.343372 | orchestrator |
2026-02-16 06:23:58.343382 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-16 06:23:58.343393 | orchestrator | Monday 16 February 2026 06:23:56 +0000 (0:00:00.762) 0:01:27.331 *******
2026-02-16 06:23:58.343403 | orchestrator | ok: [testbed-node-3]
2026-02-16 06:23:58.343414 | orchestrator | ok: [testbed-node-4]
2026-02-16 06:23:58.343425 | orchestrator | ok: [testbed-node-5]
2026-02-16 06:23:58.343435 | orchestrator |
2026-02-16 06:23:58.343446 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-16 06:23:58.343457 | orchestrator | Monday 16 February 2026 06:23:56 +0000 (0:00:00.565) 0:01:27.896 *******
2026-02-16 06:23:58.343468 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-16 06:23:58.343478 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-16 06:23:58.343495 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-16 06:23:58.343505 | orchestrator |
2026-02-16 06:23:58.343516 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-16 06:23:58.343527 | orchestrator | Monday 16 February 2026 06:23:57 +0000 (0:00:00.556) 0:01:28.452 *******
2026-02-16 06:23:58.343537 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-16 06:23:58.343548 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-16 06:23:58.343560 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-16 06:23:58.343593 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-16 06:24:28.643266 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-16 06:24:28.643383 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-16 06:24:28.643400 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-16 06:24:28.643412 | orchestrator |
2026-02-16 06:24:28.643425 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-16 06:24:28.643438 | orchestrator | Monday 16 February 2026 06:23:58 +0000 (0:00:00.814) 0:01:29.267 *******
2026-02-16 06:24:28.643449 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-16 06:24:28.643460 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-16 06:24:28.643471 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-16 06:24:28.643483 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-16 06:24:28.643493 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-16 06:24:28.643504 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-16 06:24:28.643515 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-16 06:24:28.643526 | orchestrator |
2026-02-16 06:24:28.643537 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-02-16 06:24:28.643573 | orchestrator | Monday 16 February 2026 06:24:00 +0000 (0:00:02.193) 0:01:31.461 *******
2026-02-16 06:24:28.643585 | orchestrator | changed: [testbed-node-3]
2026-02-16 06:24:28.643597 | orchestrator | changed: [testbed-node-4]
2026-02-16 06:24:28.643607 | orchestrator | changed: [testbed-node-5]
2026-02-16 06:24:28.643618 | orchestrator | changed: [testbed-manager]
2026-02-16 06:24:28.643628 | orchestrator | changed: [testbed-node-0]
2026-02-16 06:24:28.643639 | orchestrator | changed: [testbed-node-2]
2026-02-16 06:24:28.643649 | orchestrator | changed: [testbed-node-1]
2026-02-16 06:24:28.643660 | orchestrator |
2026-02-16 06:24:28.643671 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-02-16 06:24:28.643682 | orchestrator | Monday 16 February 2026 06:24:11 +0000 (0:00:10.698) 0:01:42.160 *******
2026-02-16 06:24:28.643692 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:28.643703 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:28.643713 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:28.643724 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:28.643735 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:28.643745 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:28.643756 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:28.643766 | orchestrator |
2026-02-16 06:24:28.643777 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-02-16 06:24:28.643788 | orchestrator | Monday 16 February 2026 06:24:12 +0000 (0:00:00.945) 0:01:43.105 *******
2026-02-16 06:24:28.643798 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:28.643809 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:28.643820 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:28.643830 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:28.643841 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:28.643852 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:28.643863 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:28.643873 | orchestrator |
2026-02-16 06:24:28.643884 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-02-16 06:24:28.643895 | orchestrator | Monday 16 February 2026 06:24:12 +0000 (0:00:00.721) 0:01:43.826 *******
2026-02-16 06:24:28.643906 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:28.643916 | orchestrator | ok: [testbed-node-2]
2026-02-16 06:24:28.643927 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:24:28.643938 | orchestrator | ok: [testbed-node-1]
2026-02-16 06:24:28.643949 | orchestrator | ok: [testbed-node-3]
2026-02-16 06:24:28.643959 | orchestrator | ok: [testbed-node-4]
2026-02-16 06:24:28.643970 | orchestrator | ok: [testbed-node-5]
2026-02-16 06:24:28.643980 | orchestrator |
2026-02-16 06:24:28.643991 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-02-16 06:24:28.644002 | orchestrator | Monday 16 February 2026 06:24:15 +0000 (0:00:02.224) 0:01:46.051 *******
2026-02-16 06:24:28.644014 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-16 06:24:28.644050 | orchestrator |
2026-02-16 06:24:28.644061 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-02-16 06:24:28.644072 | orchestrator | Monday 16 February 2026 06:24:17 +0000 (0:00:02.120) 0:01:48.171 *******
2026-02-16 06:24:28.644083 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:28.644094 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:28.644104 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:28.644115 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:28.644126 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:28.644136 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:28.644147 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:28.644158 | orchestrator |
2026-02-16 06:24:28.644168 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-02-16 06:24:28.644204 | orchestrator | Monday 16 February 2026 06:24:18 +0000 (0:00:00.788) 0:01:48.960 *******
2026-02-16 06:24:28.644215 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:28.644226 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:28.644236 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:28.644247 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:28.644258 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:28.644268 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:28.644279 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:28.644290 | orchestrator |
2026-02-16 06:24:28.644300 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-02-16 06:24:28.644311 | orchestrator | Monday 16 February 2026 06:24:19 +0000 (0:00:01.087) 0:01:50.047 *******
2026-02-16 06:24:28.644339 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:28.644351 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:28.644362 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:28.644373 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:28.644383 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:28.644394 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:28.644405 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:28.644415 | orchestrator |
2026-02-16 06:24:28.644426 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-02-16 06:24:28.644437 | orchestrator | Monday 16 February 2026 06:24:19 +0000 (0:00:00.795) 0:01:50.843 *******
2026-02-16 06:24:28.644448 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:28.644458 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:28.644469 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:28.644479 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:28.644490 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:28.644501 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:28.644511 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:28.644522 | orchestrator |
2026-02-16 06:24:28.644533 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-02-16 06:24:28.644543 | orchestrator | Monday 16 February 2026 06:24:21 +0000 (0:00:01.211) 0:01:52.054 *******
2026-02-16 06:24:28.644554 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:28.644564 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:28.644575 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:28.644585 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:28.644596 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:28.644606 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:28.644617 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:28.644627 | orchestrator |
2026-02-16 06:24:28.644638 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-02-16 06:24:28.644649 | orchestrator | Monday 16 February 2026 06:24:21 +0000 (0:00:00.794) 0:01:52.849 *******
2026-02-16 06:24:28.644660 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:28.644670 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:28.644681 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:28.644691 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:28.644702 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:28.644712 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:28.644723 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:28.644733 | orchestrator |
2026-02-16 06:24:28.644744 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-02-16 06:24:28.644755 | orchestrator | Monday 16 February 2026 06:24:23 +0000 (0:00:01.134) 0:01:53.984 *******
2026-02-16 06:24:28.644766 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:28.644777 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:28.644787 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:28.644798 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:28.644808 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:28.644819 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:28.644836 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:28.644847 | orchestrator |
2026-02-16 06:24:28.644858 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] **************************
2026-02-16 06:24:28.644869 | orchestrator | Monday 16 February 2026 06:24:23 +0000 (0:00:00.771) 0:01:54.755 *******
2026-02-16 06:24:28.644880 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:28.644891 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:28.644901 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:28.644912 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:28.644922 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:28.644933 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:28.644943 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:28.644954 | orchestrator |
2026-02-16 06:24:28.644965 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] ***
2026-02-16 06:24:28.644975 | orchestrator | Monday 16 February 2026 06:24:24 +0000 (0:00:01.024) 0:01:55.779 *******
2026-02-16 06:24:28.644986 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:28.644997 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:28.645007 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:28.645048 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:28.645061 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:28.645071 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:28.645082 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:28.645093 | orchestrator |
2026-02-16 06:24:28.645103 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ********************************
2026-02-16 06:24:28.645114 | orchestrator | Monday 16 February 2026 06:24:26 +0000 (0:00:01.166) 0:01:56.945 *******
2026-02-16 06:24:28.645125 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:28.645136 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:28.645146 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:28.645157 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:28.645167 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:28.645178 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:28.645188 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:28.645199 | orchestrator |
2026-02-16 06:24:28.645210 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ******************
2026-02-16 06:24:28.645221 | orchestrator | Monday 16 February 2026 06:24:26 +0000 (0:00:00.761) 0:01:57.707 *******
2026-02-16 06:24:28.645231 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:28.645242 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:28.645252 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:28.645269 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:28.645280 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:28.645290 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:28.645301 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:28.645312 | orchestrator |
2026-02-16 06:24:28.645322 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] *******************************
2026-02-16 06:24:28.645333 | orchestrator | Monday 16 February 2026 06:24:27 +0000 (0:00:01.053) 0:01:58.760 *******
2026-02-16 06:24:28.645344 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:28.645354 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:28.645365 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:28.645376 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:28.645386 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:28.645404 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:38.452461 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:38.452569 | orchestrator |
2026-02-16 06:24:38.452583 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] *********************
2026-02-16 06:24:38.452593 | orchestrator | Monday 16 February 2026 06:24:28 +0000 (0:00:00.810) 0:01:59.571 *******
2026-02-16 06:24:38.452601 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:38.452609 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:38.452641 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:38.452649 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})
2026-02-16 06:24:38.452659 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})
2026-02-16 06:24:38.452666 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:38.452673 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})
2026-02-16 06:24:38.452681 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 06:24:38.452688 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:38.452695 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})
2026-02-16 06:24:38.452702 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})
2026-02-16 06:24:38.452709 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:38.452716 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:38.452723 | orchestrator |
2026-02-16 06:24:38.452731 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-02-16 06:24:38.452738 | orchestrator | Monday 16 February 2026 06:24:29 +0000 (0:00:01.116) 0:02:00.688 *******
2026-02-16 06:24:38.452745 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:38.452753 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:38.452759 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:38.452766 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:38.452773 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:38.452780 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:38.452787 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:38.452794 | orchestrator |
2026-02-16 06:24:38.452802 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-02-16 06:24:38.452809 | orchestrator | Monday 16 February 2026 06:24:30 +0000 (0:00:00.827) 0:02:01.515 *******
2026-02-16 06:24:38.452816 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:38.452823 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:38.452830 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:38.452837 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:38.452845 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:38.452852 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:38.452859 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:38.452866 | orchestrator |
2026-02-16 06:24:38.452873 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-02-16 06:24:38.452880 | orchestrator | Monday 16 February 2026 06:24:31 +0000 (0:00:01.113) 0:02:02.629 *******
2026-02-16 06:24:38.452887 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:24:38.452894 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:24:38.452901 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:24:38.452908 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:24:38.452916 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:24:38.452923 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:24:38.452930 | orchestrator | skipping: [testbed-manager] 2026-02-16 06:24:38.452937 | orchestrator | 2026-02-16 06:24:38.452944 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-02-16 06:24:38.452951 | orchestrator | Monday 16 February 2026 06:24:32 +0000 (0:00:00.765) 0:02:03.394 ******* 2026-02-16 06:24:38.452958 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:24:38.452966 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:24:38.452973 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:24:38.452986 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:24:38.452995 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:24:38.453003 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:24:38.453011 | orchestrator | skipping: [testbed-manager] 2026-02-16 06:24:38.453045 | orchestrator | 2026-02-16 06:24:38.453058 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ******************************** 2026-02-16 06:24:38.453067 | orchestrator | Monday 16 February 2026 06:24:33 +0000 (0:00:01.072) 0:02:04.466 ******* 2026-02-16 06:24:38.453075 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:24:38.453083 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:24:38.453091 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:24:38.453099 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:24:38.453107 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:24:38.453115 | orchestrator | skipping: [testbed-node-5] 
2026-02-16 06:24:38.453135 | orchestrator | skipping: [testbed-manager] 2026-02-16 06:24:38.453144 | orchestrator | 2026-02-16 06:24:38.453152 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-02-16 06:24:38.453161 | orchestrator | Monday 16 February 2026 06:24:34 +0000 (0:00:01.037) 0:02:05.504 ******* 2026-02-16 06:24:38.453169 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:24:38.453177 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:24:38.453185 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:24:38.453193 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:24:38.453202 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:24:38.453210 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:24:38.453218 | orchestrator | skipping: [testbed-manager] 2026-02-16 06:24:38.453226 | orchestrator | 2026-02-16 06:24:38.453250 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-02-16 06:24:38.453258 | orchestrator | Monday 16 February 2026 06:24:35 +0000 (0:00:00.781) 0:02:06.285 ******* 2026-02-16 06:24:38.453265 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:24:38.453272 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:24:38.453279 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:24:38.453287 | orchestrator | skipping: [testbed-manager] 2026-02-16 06:24:38.453294 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-16 06:24:38.453302 | orchestrator | 2026-02-16 06:24:38.453309 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-02-16 06:24:38.453316 | orchestrator | Monday 16 February 2026 06:24:37 +0000 (0:00:01.668) 0:02:07.953 ******* 2026-02-16 06:24:38.453323 | orchestrator | ok: [testbed-node-3] 2026-02-16 06:24:38.453331 | orchestrator | ok: 
[testbed-node-4] 2026-02-16 06:24:38.453338 | orchestrator | ok: [testbed-node-5] 2026-02-16 06:24:38.453345 | orchestrator | 2026-02-16 06:24:38.453352 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-02-16 06:24:38.453359 | orchestrator | Monday 16 February 2026 06:24:37 +0000 (0:00:00.397) 0:02:08.351 ******* 2026-02-16 06:24:38.453367 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 06:24:38.453374 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 06:24:38.453381 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:24:38.453388 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})  2026-02-16 06:24:38.453396 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})  2026-02-16 06:24:38.453403 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:24:38.453410 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 06:24:38.453424 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 06:24:38.453431 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:24:38.453438 | orchestrator | 2026-02-16 06:24:38.453446 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-02-16 06:24:38.453453 | orchestrator | Monday 16 February 2026 
06:24:37 +0000 (0:00:00.375) 0:02:08.726 ******* 2026-02-16 06:24:38.453462 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'}, 'ansible_loop_var': 'item'})  2026-02-16 06:24:38.453471 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'}, 'ansible_loop_var': 'item'})  2026-02-16 06:24:38.453479 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:24:38.453486 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'}, 'ansible_loop_var': 'item'})  2026-02-16 06:24:38.453493 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'}, 'ansible_loop_var': 'item'})  2026-02-16 06:24:38.453501 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:24:38.453512 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}, 
'ansible_loop_var': 'item'})  2026-02-16 06:24:38.453525 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'}, 'ansible_loop_var': 'item'})  2026-02-16 06:24:41.681169 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:24:41.681330 | orchestrator | 2026-02-16 06:24:41.681360 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-02-16 06:24:41.681384 | orchestrator | Monday 16 February 2026 06:24:38 +0000 (0:00:00.654) 0:02:09.380 ******* 2026-02-16 06:24:41.681406 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:24:41.681425 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:24:41.681444 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:24:41.681463 | orchestrator | 2026-02-16 06:24:41.681484 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-02-16 06:24:41.681503 | orchestrator | Monday 16 February 2026 06:24:38 +0000 (0:00:00.341) 0:02:09.722 ******* 2026-02-16 06:24:41.681522 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:24:41.681540 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:24:41.681558 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:24:41.681577 | orchestrator | 2026-02-16 06:24:41.681598 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-02-16 06:24:41.681618 | orchestrator | Monday 16 February 2026 06:24:39 +0000 (0:00:00.339) 0:02:10.062 ******* 2026-02-16 06:24:41.681675 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:24:41.681696 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:24:41.681716 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:24:41.681736 | 
orchestrator | 2026-02-16 06:24:41.681755 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-02-16 06:24:41.681774 | orchestrator | Monday 16 February 2026 06:24:39 +0000 (0:00:00.305) 0:02:10.368 ******* 2026-02-16 06:24:41.681794 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:24:41.681814 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:24:41.681832 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:24:41.681852 | orchestrator | 2026-02-16 06:24:41.681871 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-02-16 06:24:41.681889 | orchestrator | Monday 16 February 2026 06:24:39 +0000 (0:00:00.323) 0:02:10.691 ******* 2026-02-16 06:24:41.681908 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'}) 2026-02-16 06:24:41.681929 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'}) 2026-02-16 06:24:41.681948 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'}) 2026-02-16 06:24:41.681966 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}) 2026-02-16 06:24:41.681986 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'}) 2026-02-16 06:24:41.682005 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'}) 2026-02-16 06:24:41.682218 | orchestrator | 2026-02-16 06:24:41.682246 | orchestrator | TASK 
[ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-02-16 06:24:41.682266 | orchestrator | Monday 16 February 2026 06:24:41 +0000 (0:00:01.482) 0:02:12.174 ******* 2026-02-16 06:24:41.682316 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e/osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1771213392.735571, 'mtime': 1771213392.7295709, 'ctime': 1771213392.7295709, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e/osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'}, 'ansible_loop_var': 'item'})  2026-02-16 06:24:41.682378 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74/osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 
1771213411.2998898, 'mtime': 1771213411.2948897, 'ctime': 1771213411.2948897, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74/osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'}, 'ansible_loop_var': 'item'})  2026-02-16 06:24:41.682422 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:24:41.682444 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d/osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 950, 'dev': 6, 'nlink': 1, 'atime': 1771213393.155403, 'mtime': 1771213393.1464028, 'ctime': 1771213393.1464028, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d/osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'}, 'ansible_loop_var': 'item'})  2026-02-16 06:24:41.682475 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca/osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 960, 'dev': 6, 'nlink': 1, 'atime': 1771213411.9697268, 'mtime': 1771213411.9627266, 'ctime': 1771213411.9627266, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca/osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'}, 'ansible_loop_var': 'item'})  2026-02-16 06:24:41.682497 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:24:41.682533 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': 
'/dev/ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5/osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1771213393.2782385, 'mtime': 1771213393.2732384, 'ctime': 1771213393.2732384, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5/osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}, 'ansible_loop_var': 'item'})  2026-02-16 06:24:43.343070 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-f418f421-cc32-53ce-b421-39353fe37c02/osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1771213412.6825683, 'mtime': 1771213412.6795683, 'ctime': 1771213412.6795683, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': 
False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-f418f421-cc32-53ce-b421-39353fe37c02/osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'}, 'ansible_loop_var': 'item'})  2026-02-16 06:24:43.343180 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:24:43.343199 | orchestrator | 2026-02-16 06:24:43.343213 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-02-16 06:24:43.343225 | orchestrator | Monday 16 February 2026 06:24:41 +0000 (0:00:00.437) 0:02:12.611 ******* 2026-02-16 06:24:43.343237 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 06:24:43.343250 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 06:24:43.343261 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:24:43.343271 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})  2026-02-16 06:24:43.343282 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})  2026-02-16 06:24:43.343293 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:24:43.343304 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 
'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})  2026-02-16 06:24:43.343315 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})  2026-02-16 06:24:43.343325 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:24:43.343336 | orchestrator | 2026-02-16 06:24:43.343364 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-02-16 06:24:43.343399 | orchestrator | Monday 16 February 2026 06:24:42 +0000 (0:00:00.354) 0:02:12.966 ******* 2026-02-16 06:24:43.343413 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'}, 'ansible_loop_var': 'item'})  2026-02-16 06:24:43.343426 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'}, 'ansible_loop_var': 'item'})  2026-02-16 06:24:43.343437 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:24:43.343448 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'}, 'ansible_loop_var': 'item'})  2026-02-16 06:24:43.343476 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': 
{'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'}, 'ansible_loop_var': 'item'})  2026-02-16 06:24:43.343489 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:24:43.343500 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}, 'ansible_loop_var': 'item'})  2026-02-16 06:24:43.343511 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'}, 'ansible_loop_var': 'item'})  2026-02-16 06:24:43.343522 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:24:43.343532 | orchestrator | 2026-02-16 06:24:43.343543 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-02-16 06:24:43.343556 | orchestrator | Monday 16 February 2026 06:24:42 +0000 (0:00:00.363) 0:02:13.330 ******* 2026-02-16 06:24:43.343569 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'})  2026-02-16 06:24:43.343581 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'})  2026-02-16 06:24:43.343593 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:24:43.343605 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'})  2026-02-16 06:24:43.343618 | orchestrator | 
skipping: [testbed-node-4] => (item={'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'})
2026-02-16 06:24:43.343631 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:43.343642 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'})
2026-02-16 06:24:43.343655 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'})
2026-02-16 06:24:43.343667 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:43.343686 | orchestrator |
2026-02-16 06:24:43.343699 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] ***
2026-02-16 06:24:43.343711 | orchestrator | Monday 16 February 2026 06:24:42 +0000 (0:00:00.570) 0:02:13.900 *******
2026-02-16 06:24:43.343724 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-2f9a42f5-b575-5e11-9555-a5550e2fae1e', 'data_vg': 'ceph-2f9a42f5-b575-5e11-9555-a5550e2fae1e'}, 'ansible_loop_var': 'item'})
2026-02-16 06:24:43.343742 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-50d7a967-e09e-512a-aa83-aa9bbdf9ab74', 'data_vg': 'ceph-50d7a967-e09e-512a-aa83-aa9bbdf9ab74'}, 'ansible_loop_var': 'item'})
2026-02-16 06:24:43.343756 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:43.343769 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d', 'data_vg': 'ceph-ee7b6e88-c83f-5dc8-a180-0e3d5e3fc99d'}, 'ansible_loop_var': 'item'})
2026-02-16 06:24:43.343780 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-3ec6a818-dc71-5cb4-ac47-83f209d09bca', 'data_vg': 'ceph-3ec6a818-dc71-5cb4-ac47-83f209d09bca'}, 'ansible_loop_var': 'item'})
2026-02-16 06:24:43.343791 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:43.343802 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-10a0662d-59e9-5a43-af5c-1b6d671b7fa5', 'data_vg': 'ceph-10a0662d-59e9-5a43-af5c-1b6d671b7fa5'}, 'ansible_loop_var': 'item'})
2026-02-16 06:24:43.343820 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-f418f421-cc32-53ce-b421-39353fe37c02', 'data_vg': 'ceph-f418f421-cc32-53ce-b421-39353fe37c02'}, 'ansible_loop_var': 'item'})
2026-02-16 06:24:47.683605 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:47.683687 | orchestrator |
2026-02-16 06:24:47.683700 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] *******************************
2026-02-16 06:24:47.683709 | orchestrator | Monday 16 February 2026 06:24:43 +0000 (0:00:00.373) 0:02:14.273 *******
2026-02-16 06:24:47.683717 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:47.683724 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:47.683730 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:47.683737 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:47.683744 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:47.683752 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:47.683759 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:47.683766 | orchestrator |
2026-02-16 06:24:47.683773 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] *****************************
2026-02-16 06:24:47.683781 | orchestrator | Monday 16 February 2026 06:24:44 +0000 (0:00:00.720) 0:02:14.994 *******
2026-02-16 06:24:47.683789 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:47.683796 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:47.683804 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:47.683813 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:47.683821 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-16 06:24:47.683828 | orchestrator |
2026-02-16 06:24:47.683836 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] **************
2026-02-16 06:24:47.683867 | orchestrator | Monday 16 February 2026 06:24:45 +0000 (0:00:01.628) 0:02:16.623 *******
2026-02-16 06:24:47.683873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.683880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.683885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.683890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.683894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.683899 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:47.683903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.683908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.683912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.683917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.683921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.683938 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:47.683943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.683947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.683952 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.683956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.683961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.683976 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:47.683981 | orchestrator |
2026-02-16 06:24:47.683985 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ********************
2026-02-16 06:24:47.683990 | orchestrator | Monday 16 February 2026 06:24:46 +0000 (0:00:00.444) 0:02:17.067 *******
2026-02-16 06:24:47.684002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684099 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:47.684108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684119 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684137 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:47.684142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684166 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:47.684171 | orchestrator |
2026-02-16 06:24:47.684176 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ********************
2026-02-16 06:24:47.684181 | orchestrator | Monday 16 February 2026 06:24:46 +0000 (0:00:00.696) 0:02:17.764 *******
2026-02-16 06:24:47.684186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684212 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:47.684222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684248 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:47.684253 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-16 06:24:47.684283 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:47.684288 | orchestrator |
2026-02-16 06:24:47.684293 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] ***********************************
2026-02-16 06:24:47.684298 | orchestrator | Monday 16 February 2026 06:24:47 +0000 (0:00:00.454) 0:02:18.219 *******
2026-02-16 06:24:47.684304 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:47.684309 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:47.684318 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:54.570381 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:54.570490 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:54.570504 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:54.570516 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:54.570527 | orchestrator |
2026-02-16 06:24:54.570539 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] *****************************
2026-02-16 06:24:54.570552 | orchestrator | Monday 16 February 2026 06:24:48 +0000 (0:00:00.751) 0:02:18.971 *******
2026-02-16 06:24:54.570563 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:54.570573 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:54.570584 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:54.570594 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:54.570605 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:54.570616 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:54.570626 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:54.570637 | orchestrator |
2026-02-16 06:24:54.570648 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ******************
2026-02-16 06:24:54.570658 | orchestrator | Monday 16 February 2026 06:24:49 +0000 (0:00:00.767) 0:02:19.944 *******
2026-02-16 06:24:54.570669 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:54.570680 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:54.570690 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:54.570701 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:54.570711 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:54.570722 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:54.570732 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:54.570743 | orchestrator |
2026-02-16 06:24:54.570754 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] ***
2026-02-16 06:24:54.570765 | orchestrator | Monday 16 February 2026 06:24:49 +0000 (0:00:00.767) 0:02:20.711 *******
2026-02-16 06:24:54.570775 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:54.570786 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:54.570797 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:54.570807 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:54.570818 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:54.570828 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:54.570839 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:54.570849 | orchestrator |
2026-02-16 06:24:54.570860 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] ***
2026-02-16 06:24:54.570871 | orchestrator | Monday 16 February 2026 06:24:50 +0000 (0:00:01.101) 0:02:21.812 *******
2026-02-16 06:24:54.570882 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:54.570892 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:54.570903 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:54.570915 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:54.570927 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:54.570940 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:54.570977 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:54.570990 | orchestrator |
2026-02-16 06:24:54.571003 | orchestrator | TASK [ceph-validate : Validate container registry credentials] *****************
2026-02-16 06:24:54.571015 | orchestrator | Monday 16 February 2026 06:24:51 +0000 (0:00:01.118) 0:02:22.931 *******
2026-02-16 06:24:54.571060 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:54.571073 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:54.571085 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:54.571097 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:54.571110 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:54.571123 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:54.571135 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:54.571147 | orchestrator |
2026-02-16 06:24:54.571158 | orchestrator | TASK [ceph-validate : Validate container service and container package] ********
2026-02-16 06:24:54.571169 | orchestrator | Monday 16 February 2026 06:24:52 +0000 (0:00:00.809) 0:02:23.741 *******
2026-02-16 06:24:54.571180 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:54.571190 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:54.571201 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:54.571212 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:54.571222 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:54.571233 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:54.571243 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:54.571254 | orchestrator |
2026-02-16 06:24:54.571265 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] **********************
2026-02-16 06:24:54.571275 | orchestrator | Monday 16 February 2026 06:24:53 +0000 (0:00:01.046) 0:02:24.787 *******
2026-02-16 06:24:54.571287 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 06:24:54.571299 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 06:24:54.571312 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 06:24:54.571324 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 06:24:54.571336 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 06:24:54.571349 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 06:24:54.571360 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:54.571388 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 06:24:54.571400 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 06:24:54.571411 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 06:24:54.571422 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 06:24:54.571432 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 06:24:54.571452 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 06:24:54.571463 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:54.571473 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 06:24:54.571484 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 06:24:54.571495 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 06:24:54.571505 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 06:24:54.571516 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 06:24:54.571567 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 06:24:54.571580 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:54.571591 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 06:24:54.571606 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 06:24:54.571617 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 06:24:54.571628 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 06:24:54.571638 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 06:24:54.571649 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 06:24:54.571659 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 06:24:54.571670 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 06:24:54.571681 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 06:24:54.571699 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 06:24:56.761312 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 06:24:56.761414 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:56.761453 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 06:24:56.761465 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 06:24:56.761474 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 06:24:56.761482 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 06:24:56.761491 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:56.761499 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 06:24:56.761507 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 06:24:56.761515 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 06:24:56.761523 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 06:24:56.761531 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 06:24:56.761539 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 06:24:56.761547 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 06:24:56.761556 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:56.761586 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 06:24:56.761597 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 06:24:56.761605 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:56.761613 | orchestrator |
2026-02-16 06:24:56.761624 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-02-16 06:24:56.761639 | orchestrator | Monday 16 February 2026 06:24:54 +0000 (0:00:00.989) 0:02:25.777 *******
2026-02-16 06:24:56.761653 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:56.761668 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:56.761681 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:56.761689 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:24:56.761701 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:24:56.761715 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:24:56.761729 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:24:56.761737 | orchestrator |
2026-02-16 06:24:56.761746 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-02-16 06:24:56.761754 | orchestrator | Monday 16 February 2026 06:24:56 +0000 (0:00:01.206) 0:02:26.984 *******
2026-02-16 06:24:56.761762 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 06:24:56.761776 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 06:24:56.761784 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 06:24:56.761808 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 06:24:56.761817 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 06:24:56.761825 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 06:24:56.761833 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:24:56.761847 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 06:24:56.761862 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 06:24:56.761873 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 06:24:56.761882 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 06:24:56.761891 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 06:24:56.761900 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 06:24:56.761909 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:24:56.761921 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 06:24:56.761936 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 06:24:56.761950 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 06:24:56.761964 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 06:24:56.761985 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 06:24:56.762000 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 06:24:56.762013 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:24:56.762113 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 06:24:56.762140 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 06:24:56.762155 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 06:24:56.762169 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 06:24:56.762183 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 06:24:56.762198 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 06:24:56.762212 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 06:24:56.762237 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 06:25:12.322260 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 06:25:12.322397 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 06:25:12.322425 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 06:25:12.322445 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 06:25:12.322463 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:25:12.322482 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 06:25:12.322502 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 06:25:12.322522 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 06:25:12.322538 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:25:12.322555 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-16 06:25:12.322572 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-16 06:25:12.322589 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-16 06:25:12.322606 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 06:25:12.322659 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-16 06:25:12.322698 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-16 06:25:12.322719 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-16 06:25:12.322739 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:25:12.322758 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 
'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-16 06:25:12.322776 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-16 06:25:12.322796 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:25:12.322815 | orchestrator | 2026-02-16 06:25:12.322838 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ****************************** 2026-02-16 06:25:12.322855 | orchestrator | Monday 16 February 2026 06:24:57 +0000 (0:00:00.983) 0:02:27.967 ******* 2026-02-16 06:25:12.322868 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:25:12.322880 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:25:12.322892 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:25:12.322906 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:25:12.322925 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:25:12.322942 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:25:12.322958 | orchestrator | skipping: [testbed-manager] 2026-02-16 06:25:12.322976 | orchestrator | 2026-02-16 06:25:12.322994 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-02-16 06:25:12.323013 | orchestrator | Monday 16 February 2026 06:24:58 +0000 (0:00:01.116) 0:02:29.084 ******* 2026-02-16 06:25:12.323064 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:25:12.323084 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:25:12.323103 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:25:12.323118 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:25:12.323128 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:25:12.323139 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:25:12.323150 | orchestrator | skipping: [testbed-manager] 2026-02-16 
06:25:12.323161 | orchestrator | 2026-02-16 06:25:12.323172 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-02-16 06:25:12.323207 | orchestrator | Monday 16 February 2026 06:24:58 +0000 (0:00:00.772) 0:02:29.856 ******* 2026-02-16 06:25:12.323219 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:25:12.323229 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:25:12.323240 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:25:12.323251 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:25:12.323261 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:25:12.323272 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:25:12.323283 | orchestrator | skipping: [testbed-manager] 2026-02-16 06:25:12.323293 | orchestrator | 2026-02-16 06:25:12.323310 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-02-16 06:25:12.323332 | orchestrator | Monday 16 February 2026 06:25:00 +0000 (0:00:01.814) 0:02:31.670 ******* 2026-02-16 06:25:12.323362 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-16 06:25:12.323382 | orchestrator | 2026-02-16 06:25:12.323401 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-02-16 06:25:12.323418 | orchestrator | Monday 16 February 2026 06:25:02 +0000 (0:00:01.932) 0:02:33.603 ******* 2026-02-16 06:25:12.323453 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-16 06:25:12.323470 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-16 06:25:12.323488 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-16 
06:25:12.323505 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-16 06:25:12.323523 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-16 06:25:12.323541 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-16 06:25:12.323559 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-16 06:25:12.323577 | orchestrator | 2026-02-16 06:25:12.323595 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] **** 2026-02-16 06:25:12.323606 | orchestrator | Monday 16 February 2026 06:25:03 +0000 (0:00:00.880) 0:02:34.484 ******* 2026-02-16 06:25:12.323617 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:25:12.323628 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:25:12.323638 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:25:12.323649 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:25:12.323659 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:25:12.323670 | orchestrator | skipping: [testbed-node-5] 2026-02-16 06:25:12.323681 | orchestrator | skipping: [testbed-manager] 2026-02-16 06:25:12.323691 | orchestrator | 2026-02-16 06:25:12.323702 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] ********* 2026-02-16 06:25:12.323713 | orchestrator | Monday 16 February 2026 06:25:04 +0000 (0:00:01.007) 0:02:35.492 ******* 2026-02-16 06:25:12.323723 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:25:12.323734 | orchestrator | skipping: [testbed-node-1] 2026-02-16 06:25:12.323744 | orchestrator | skipping: [testbed-node-2] 2026-02-16 06:25:12.323755 | orchestrator | skipping: [testbed-node-3] 2026-02-16 06:25:12.323774 | orchestrator | skipping: [testbed-node-4] 2026-02-16 06:25:12.323785 | orchestrator | skipping: [testbed-node-5] 
2026-02-16 06:25:12.323796 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:25:12.323807 | orchestrator |
2026-02-16 06:25:12.323817 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-02-16 06:25:12.323828 | orchestrator | Monday 16 February 2026 06:25:05 +0000 (0:00:00.797) 0:02:36.289 *******
2026-02-16 06:25:12.323841 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:12.323860 | orchestrator | ok: [testbed-node-1]
2026-02-16 06:25:12.323888 | orchestrator | ok: [testbed-node-2]
2026-02-16 06:25:12.323907 | orchestrator | ok: [testbed-node-3]
2026-02-16 06:25:12.323924 | orchestrator | ok: [testbed-node-4]
2026-02-16 06:25:12.323942 | orchestrator | ok: [testbed-node-5]
2026-02-16 06:25:12.323961 | orchestrator | ok: [testbed-manager]
2026-02-16 06:25:12.323975 | orchestrator |
2026-02-16 06:25:12.323986 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-02-16 06:25:12.323997 | orchestrator | Monday 16 February 2026 06:25:06 +0000 (0:00:01.370) 0:02:37.660 *******
2026-02-16 06:25:12.324007 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:12.324060 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:25:12.324076 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:25:12.324087 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:25:12.324097 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:25:12.324108 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:25:12.324118 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:25:12.324132 | orchestrator |
2026-02-16 06:25:12.324150 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-02-16 06:25:12.324165 | orchestrator | Monday 16 February 2026 06:25:08 +0000 (0:00:01.463) 0:02:39.123 *******
2026-02-16 06:25:12.324180 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:12.324219 | orchestrator | skipping: [testbed-node-1]
2026-02-16 06:25:12.324241 | orchestrator | skipping: [testbed-node-2]
2026-02-16 06:25:12.324259 | orchestrator | skipping: [testbed-node-3]
2026-02-16 06:25:12.324275 | orchestrator | skipping: [testbed-node-4]
2026-02-16 06:25:12.324293 | orchestrator | skipping: [testbed-node-5]
2026-02-16 06:25:12.324312 | orchestrator | skipping: [testbed-manager]
2026-02-16 06:25:12.324329 | orchestrator |
2026-02-16 06:25:12.324421 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-02-16 06:25:12.324434 | orchestrator | Monday 16 February 2026 06:25:09 +0000 (0:00:01.480) 0:02:40.604 *******
2026-02-16 06:25:12.324444 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:12.324455 | orchestrator |
2026-02-16 06:25:12.324466 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-02-16 06:25:12.324476 | orchestrator | Monday 16 February 2026 06:25:11 +0000 (0:00:01.804) 0:02:42.409 *******
2026-02-16 06:25:12.324487 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:12.324498 | orchestrator |
2026-02-16 06:25:12.324524 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-02-16 06:25:31.291118 | orchestrator |
2026-02-16 06:25:31.291271 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-16 06:25:31.291303 | orchestrator | Monday 16 February 2026 06:25:12 +0000 (0:00:00.842) 0:02:43.252 *******
2026-02-16 06:25:31.291325 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:31.291347 | orchestrator |
2026-02-16 06:25:31.291369 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-16 06:25:31.291391 | orchestrator | Monday 16 February 2026 06:25:12 +0000 (0:00:00.485) 0:02:43.737 *******
2026-02-16 06:25:31.291412 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:31.291434 | orchestrator |
2026-02-16 06:25:31.291455 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-02-16 06:25:31.291475 | orchestrator | Monday 16 February 2026 06:25:13 +0000 (0:00:00.455) 0:02:44.193 *******
2026-02-16 06:25:31.291498 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ad09daec448394448226c52ec1e81d37692336f3'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-16 06:25:31.291522 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ad09daec448394448226c52ec1e81d37692336f3'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-16 06:25:31.291545 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ad09daec448394448226c52ec1e81d37692336f3'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-16 06:25:31.291566 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ad09daec448394448226c52ec1e81d37692336f3'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-16 06:25:31.291609 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ad09daec448394448226c52ec1e81d37692336f3'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-16 06:25:31.291636 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ad09daec448394448226c52ec1e81d37692336f3'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__ad09daec448394448226c52ec1e81d37692336f3'}])
2026-02-16 06:25:31.291693 | orchestrator |
2026-02-16 06:25:31.291716 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-02-16 06:25:31.291739 | orchestrator |
2026-02-16 06:25:31.291760 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-02-16 06:25:31.291782 | orchestrator | Monday 16 February 2026 06:25:23 +0000 (0:00:10.327) 0:02:54.520 *******
2026-02-16 06:25:31.291804 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:31.291824 | orchestrator |
2026-02-16 06:25:31.291846 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-02-16 06:25:31.291866 | orchestrator | Monday 16 February 2026 06:25:24 +0000 (0:00:00.488) 0:02:55.008 *******
2026-02-16 06:25:31.291886 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:31.291907 | orchestrator |
2026-02-16 06:25:31.291927 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-02-16 06:25:31.291949 | orchestrator | Monday 16 February 2026 06:25:24 +0000 (0:00:00.144) 0:02:55.153 *******
2026-02-16 06:25:31.291970 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:31.291992 | orchestrator |
2026-02-16 06:25:31.292012 | orchestrator | TASK [Select a running monitor] ************************************************
2026-02-16 06:25:31.292112 | orchestrator | Monday 16 February 2026 06:25:24 +0000 (0:00:00.129) 0:02:55.282 *******
2026-02-16 06:25:31.292131 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:31.292149 | orchestrator |
2026-02-16 06:25:31.292168 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-16 06:25:31.292186 | orchestrator | Monday 16 February 2026 06:25:24 +0000 (0:00:00.151) 0:02:55.433 *******
2026-02-16 06:25:31.292205 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-16 06:25:31.292224 | orchestrator |
2026-02-16 06:25:31.292243 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-16 06:25:31.292292 | orchestrator | Monday 16 February 2026 06:25:24 +0000 (0:00:00.223) 0:02:55.656 *******
2026-02-16 06:25:31.292313 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:31.292333 | orchestrator |
2026-02-16 06:25:31.292354 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-16 06:25:31.292372 | orchestrator | Monday 16 February 2026 06:25:25 +0000 (0:00:00.490) 0:02:56.147 *******
2026-02-16 06:25:31.292390 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:31.292410 | orchestrator |
2026-02-16 06:25:31.292430 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-16 06:25:31.292449 | orchestrator | Monday 16 February 2026 06:25:25 +0000 (0:00:00.172) 0:02:56.320 *******
2026-02-16 06:25:31.292468 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:31.292487 | orchestrator |
2026-02-16 06:25:31.292505 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-16 06:25:31.292523 | orchestrator | Monday 16 February 2026 06:25:25 +0000 (0:00:00.508) 0:02:56.829 *******
2026-02-16 06:25:31.292543 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:31.292561 | orchestrator |
2026-02-16 06:25:31.292579 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-16 06:25:31.292597 | orchestrator | Monday 16 February 2026 06:25:26 +0000 (0:00:00.384) 0:02:57.213 *******
2026-02-16 06:25:31.292616 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:31.292634 | orchestrator |
2026-02-16 06:25:31.292653 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-16 06:25:31.292673 | orchestrator | Monday 16 February 2026 06:25:26 +0000 (0:00:00.137) 0:02:57.351 *******
2026-02-16 06:25:31.292692 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:31.292711 | orchestrator |
2026-02-16 06:25:31.292748 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-16 06:25:31.292769 | orchestrator | Monday 16 February 2026 06:25:26 +0000 (0:00:00.156) 0:02:57.507 *******
2026-02-16 06:25:31.292786 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:31.292804 | orchestrator |
2026-02-16 06:25:31.292822 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-16 06:25:31.292839 | orchestrator | Monday 16 February 2026 06:25:26 +0000 (0:00:00.145) 0:02:57.653 *******
2026-02-16 06:25:31.292858 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:31.292875 | orchestrator |
2026-02-16 06:25:31.292894 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-16 06:25:31.292912 | orchestrator | Monday 16 February 2026 06:25:26 +0000 (0:00:00.139) 0:02:57.792 *******
2026-02-16 06:25:31.292929 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-16 06:25:31.292946 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-16 06:25:31.292965 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-16 06:25:31.292982 | orchestrator |
2026-02-16 06:25:31.293000 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-16 06:25:31.293049 | orchestrator | Monday 16 February 2026 06:25:27 +0000 (0:00:00.617) 0:02:58.410 *******
2026-02-16 06:25:31.293069 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:31.293088 | orchestrator |
2026-02-16 06:25:31.293106 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-16 06:25:31.293125 | orchestrator | Monday 16 February 2026 06:25:27 +0000 (0:00:00.255) 0:02:58.666 *******
2026-02-16 06:25:31.293145 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-16 06:25:31.293178 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-16 06:25:31.293197 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-16 06:25:31.293214 | orchestrator |
2026-02-16 06:25:31.293231 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-16 06:25:31.293249 | orchestrator | Monday 16 February 2026 06:25:29 +0000 (0:00:02.066) 0:03:00.733 *******
2026-02-16 06:25:31.293268 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-16 06:25:31.293287 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-16 06:25:31.293306 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-16 06:25:31.293325 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:31.293364 | orchestrator |
2026-02-16 06:25:31.293399 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-16 06:25:31.293420 | orchestrator | Monday 16 February 2026 06:25:30 +0000 (0:00:00.418) 0:03:01.152 *******
2026-02-16 06:25:31.293441 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-16 06:25:31.293461 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-16 06:25:31.293479 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-16 06:25:31.293497 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:31.293517 | orchestrator |
2026-02-16 06:25:31.293537 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-16 06:25:31.293554 | orchestrator | Monday 16 February 2026 06:25:31 +0000 (0:00:00.910) 0:03:02.062 *******
2026-02-16 06:25:31.293597 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-16 06:25:36.018921 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-16 06:25:36.019007 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-16 06:25:36.019059 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:36.019069 | orchestrator |
2026-02-16 06:25:36.019076 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-16 06:25:36.019084 | orchestrator | Monday 16 February 2026 06:25:31 +0000 (0:00:00.159) 0:03:02.222 *******
2026-02-16 06:25:36.019093 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '94fb026fda3b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-16 06:25:28.268595', 'end': '2026-02-16 06:25:28.328447', 'delta': '0:00:00.059852', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['94fb026fda3b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-16 06:25:36.019115 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '8a5d26661ef8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-16 06:25:28.859231', 'end': '2026-02-16 06:25:28.921516', 'delta': '0:00:00.062285', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8a5d26661ef8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-16 06:25:36.019122 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '6720fcec1b21', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-16 06:25:29.590399', 'end': '2026-02-16 06:25:29.650797', 'delta': '0:00:00.060398', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6720fcec1b21'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-16 06:25:36.019129 | orchestrator |
2026-02-16 06:25:36.019135 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-16 06:25:36.019161 | orchestrator | Monday 16 February 2026 06:25:31 +0000 (0:00:00.189) 0:03:02.411 *******
2026-02-16 06:25:36.019167 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:36.019174 | orchestrator |
2026-02-16 06:25:36.019181 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-16 06:25:36.019187 | orchestrator | Monday 16 February 2026 06:25:31 +0000 (0:00:00.260) 0:03:02.672 *******
2026-02-16 06:25:36.019193 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:36.019199 | orchestrator |
2026-02-16 06:25:36.019205 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-16 06:25:36.019211 | orchestrator | Monday 16 February 2026 06:25:32 +0000 (0:00:00.815) 0:03:03.488 *******
2026-02-16 06:25:36.019217 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:36.019224 | orchestrator |
2026-02-16 06:25:36.019230 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-16 06:25:36.019236 | orchestrator | Monday 16 February 2026 06:25:32 +0000 (0:00:00.152) 0:03:03.640 *******
2026-02-16 06:25:36.019256 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-02-16 06:25:36.019263 | orchestrator |
2026-02-16 06:25:36.019269 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-16 06:25:36.019276 | orchestrator | Monday 16 February 2026 06:25:34 +0000 (0:00:01.391) 0:03:05.032 *******
2026-02-16 06:25:36.019282 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:25:36.019288 | orchestrator |
2026-02-16 06:25:36.019294 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-16 06:25:36.019300 | orchestrator | Monday 16 February 2026 06:25:34 +0000 (0:00:00.146) 0:03:05.178 *******
2026-02-16 06:25:36.019306 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:36.019313 | orchestrator |
2026-02-16 06:25:36.019319 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-16 06:25:36.019325 | orchestrator | Monday 16 February 2026 06:25:34 +0000 (0:00:00.119) 0:03:05.297 *******
2026-02-16 06:25:36.019331 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:36.019337 | orchestrator |
2026-02-16 06:25:36.019343 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-16 06:25:36.019349 | orchestrator | Monday 16 February 2026 06:25:34 +0000 (0:00:00.234) 0:03:05.531 *******
2026-02-16 06:25:36.019355 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:36.019362 | orchestrator |
2026-02-16 06:25:36.019368 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-16 06:25:36.019374 | orchestrator | Monday 16 February 2026 06:25:34 +0000 (0:00:00.136) 0:03:05.668 *******
2026-02-16 06:25:36.019380 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:36.019394 | orchestrator |
2026-02-16 06:25:36.019401 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-16 06:25:36.019407 | orchestrator | Monday 16 February 2026 06:25:34 +0000 (0:00:00.140) 0:03:05.808 *******
2026-02-16 06:25:36.019413 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:36.019420 | orchestrator |
2026-02-16 06:25:36.019426 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-16 06:25:36.019432 | orchestrator | Monday 16 February 2026 06:25:34 +0000 (0:00:00.128) 0:03:05.937 *******
2026-02-16 06:25:36.019438 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:36.019444 | orchestrator |
2026-02-16 06:25:36.019450 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-16 06:25:36.019456 | orchestrator | Monday 16 February 2026 06:25:35 +0000 (0:00:00.134) 0:03:06.072 *******
2026-02-16 06:25:36.019462 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:36.019468 | orchestrator |
2026-02-16 06:25:36.019475 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-16 06:25:36.019483 | orchestrator | Monday 16 February 2026 06:25:35 +0000 (0:00:00.140) 0:03:06.212 *******
2026-02-16 06:25:36.019490 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:36.019497 | orchestrator |
2026-02-16 06:25:36.019504 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-16 06:25:36.019518 | orchestrator | Monday 16 February 2026 06:25:35 +0000 (0:00:00.135) 0:03:06.348 *******
2026-02-16 06:25:36.019529 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:25:36.019537 | orchestrator |
2026-02-16 06:25:36.019544 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-16 06:25:36.019551 | orchestrator | Monday 16 February 2026 06:25:35 +0000 (0:00:00.127) 0:03:06.476 *******
2026-02-16 06:25:36.019558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-16 06:25:36.019566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-16 06:25:36.019574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-16 06:25:36.019582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-16 06:25:36.019596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-16 06:25:36.246708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-16 06:25:36.246856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:25:36.246916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2335e156', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': 
'79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-16 06:25:36.246972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:25:36.246994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-16 06:25:36.247126 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:25:36.247159 | orchestrator | 2026-02-16 06:25:36.247182 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-16 06:25:36.247199 | orchestrator | Monday 16 February 2026 06:25:36 +0000 (0:00:00.473) 0:03:06.949 ******* 2026-02-16 06:25:36.247235 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:25:36.247252 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:25:36.247300 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:25:36.247324 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-16-02-25-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:25:36.247346 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:25:36.247368 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:25:36.247403 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:25:44.829733 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2335e156', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1', 'scsi-SQEMU_QEMU_HARDDISK_2335e156-0c07-4cf9-917c-1a2f25b2fc27-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:25:44.829871 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:25:44.829896 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-16 06:25:44.829911 | 
orchestrator | skipping: [testbed-node-0] 2026-02-16 06:25:44.829926 | orchestrator | 2026-02-16 06:25:44.829942 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-16 06:25:44.829956 | orchestrator | Monday 16 February 2026 06:25:36 +0000 (0:00:00.222) 0:03:07.172 ******* 2026-02-16 06:25:44.829970 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:25:44.829984 | orchestrator | 2026-02-16 06:25:44.829997 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-16 06:25:44.830005 | orchestrator | Monday 16 February 2026 06:25:36 +0000 (0:00:00.540) 0:03:07.713 ******* 2026-02-16 06:25:44.830081 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:25:44.830092 | orchestrator | 2026-02-16 06:25:44.830100 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-16 06:25:44.830134 | orchestrator | Monday 16 February 2026 06:25:36 +0000 (0:00:00.131) 0:03:07.845 ******* 2026-02-16 06:25:44.830142 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:25:44.830150 | orchestrator | 2026-02-16 06:25:44.830158 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-16 06:25:44.830167 | orchestrator | Monday 16 February 2026 06:25:37 +0000 (0:00:00.501) 0:03:08.346 ******* 2026-02-16 06:25:44.830175 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:25:44.830183 | orchestrator | 2026-02-16 06:25:44.830191 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-16 06:25:44.830199 | orchestrator | Monday 16 February 2026 06:25:37 +0000 (0:00:00.130) 0:03:08.476 ******* 2026-02-16 06:25:44.830206 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:25:44.830292 | orchestrator | 2026-02-16 06:25:44.830303 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-16 
06:25:44.830313 | orchestrator | Monday 16 February 2026 06:25:37 +0000 (0:00:00.238) 0:03:08.714 ******* 2026-02-16 06:25:44.830322 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:25:44.830331 | orchestrator | 2026-02-16 06:25:44.830341 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-16 06:25:44.830350 | orchestrator | Monday 16 February 2026 06:25:37 +0000 (0:00:00.147) 0:03:08.861 ******* 2026-02-16 06:25:44.830359 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 06:25:44.830367 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-16 06:25:44.830375 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-16 06:25:44.830382 | orchestrator | 2026-02-16 06:25:44.830390 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-16 06:25:44.830398 | orchestrator | Monday 16 February 2026 06:25:38 +0000 (0:00:00.896) 0:03:09.757 ******* 2026-02-16 06:25:44.830405 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-16 06:25:44.830422 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-16 06:25:44.830430 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-16 06:25:44.830438 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:25:44.830445 | orchestrator | 2026-02-16 06:25:44.830453 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-16 06:25:44.830461 | orchestrator | Monday 16 February 2026 06:25:38 +0000 (0:00:00.156) 0:03:09.915 ******* 2026-02-16 06:25:44.830469 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:25:44.830476 | orchestrator | 2026-02-16 06:25:44.830484 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-16 06:25:44.830496 | orchestrator | Monday 16 February 2026 06:25:39 +0000 
(0:00:00.136) 0:03:10.051 ******* 2026-02-16 06:25:44.830510 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 06:25:44.830519 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-16 06:25:44.830527 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-16 06:25:44.830535 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-16 06:25:44.830543 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-16 06:25:44.830550 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-16 06:25:44.830558 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-16 06:25:44.830566 | orchestrator | 2026-02-16 06:25:44.830573 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-16 06:25:44.830581 | orchestrator | Monday 16 February 2026 06:25:40 +0000 (0:00:01.046) 0:03:11.098 ******* 2026-02-16 06:25:44.830589 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 06:25:44.830596 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-16 06:25:44.830604 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-16 06:25:44.830621 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-16 06:25:44.830629 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-16 06:25:44.830637 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-16 06:25:44.830645 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-16 
06:25:44.830652 | orchestrator | 2026-02-16 06:25:44.830660 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-16 06:25:44.830668 | orchestrator | Monday 16 February 2026 06:25:41 +0000 (0:00:01.811) 0:03:12.909 ******* 2026-02-16 06:25:44.830675 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-16 06:25:44.830683 | orchestrator | 2026-02-16 06:25:44.830691 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-16 06:25:44.830699 | orchestrator | Monday 16 February 2026 06:25:43 +0000 (0:00:01.217) 0:03:14.127 ******* 2026-02-16 06:25:44.830707 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:25:44.830715 | orchestrator | 2026-02-16 06:25:44.830722 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-16 06:25:44.830730 | orchestrator | Monday 16 February 2026 06:25:43 +0000 (0:00:00.223) 0:03:14.351 ******* 2026-02-16 06:25:44.830738 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:25:44.830745 | orchestrator | 2026-02-16 06:25:44.830753 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-16 06:25:44.830761 | orchestrator | Monday 16 February 2026 06:25:43 +0000 (0:00:00.141) 0:03:14.492 ******* 2026-02-16 06:25:44.830769 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-16 06:25:44.830776 | orchestrator | 2026-02-16 06:25:44.830784 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-16 06:25:44.830799 | orchestrator | Monday 16 February 2026 06:25:44 +0000 (0:00:01.270) 0:03:15.762 ******* 2026-02-16 06:26:11.131847 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:26:11.131980 | orchestrator | 2026-02-16 06:26:11.132004 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 
2026-02-16 06:26:11.132056 | orchestrator | Monday 16 February 2026 06:25:44 +0000 (0:00:00.134) 0:03:15.896 ******* 2026-02-16 06:26:11.132067 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 06:26:11.132077 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-16 06:26:11.132086 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-16 06:26:11.132095 | orchestrator | 2026-02-16 06:26:11.132104 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-16 06:26:11.132113 | orchestrator | Monday 16 February 2026 06:25:46 +0000 (0:00:01.558) 0:03:17.455 ******* 2026-02-16 06:26:11.132122 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0']) 2026-02-16 06:26:11.132131 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1']) 2026-02-16 06:26:11.132140 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2']) 2026-02-16 06:26:11.132149 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0']) 2026-02-16 06:26:11.132158 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1']) 2026-02-16 06:26:11.132184 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2']) 2026-02-16 06:26:11.132193 | orchestrator | 2026-02-16 06:26:11.132201 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-16 06:26:11.132211 | orchestrator | Monday 16 February 2026 06:25:59 +0000 (0:00:12.647) 0:03:30.103 ******* 2026-02-16 06:26:11.132220 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 06:26:11.132252 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-0) 2026-02-16 06:26:11.132261 | orchestrator | 2026-02-16 06:26:11.132270 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-16 06:26:11.132278 | orchestrator | Monday 16 February 2026 06:26:02 +0000 (0:00:02.898) 0:03:33.001 ******* 2026-02-16 06:26:11.132287 | orchestrator | changed: [testbed-node-0] 2026-02-16 06:26:11.132295 | orchestrator | 2026-02-16 06:26:11.132304 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-16 06:26:11.132312 | orchestrator | Monday 16 February 2026 06:26:03 +0000 (0:00:01.542) 0:03:34.543 ******* 2026-02-16 06:26:11.132321 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-02-16 06:26:11.132330 | orchestrator | 2026-02-16 06:26:11.132338 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-16 06:26:11.132347 | orchestrator | Monday 16 February 2026 06:26:04 +0000 (0:00:00.548) 0:03:35.091 ******* 2026-02-16 06:26:11.132355 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-02-16 06:26:11.132363 | orchestrator | 2026-02-16 06:26:11.132372 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-16 06:26:11.132380 | orchestrator | Monday 16 February 2026 06:26:04 +0000 (0:00:00.790) 0:03:35.882 ******* 2026-02-16 06:26:11.132389 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:26:11.132398 | orchestrator | 2026-02-16 06:26:11.132407 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-16 06:26:11.132415 | orchestrator | Monday 16 February 2026 06:26:05 +0000 (0:00:00.602) 0:03:36.485 ******* 2026-02-16 06:26:11.132424 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:26:11.132432 | orchestrator | 
2026-02-16 06:26:11.132441 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-16 06:26:11.132449 | orchestrator | Monday 16 February 2026 06:26:05 +0000 (0:00:00.127) 0:03:36.613 ******* 2026-02-16 06:26:11.132458 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:26:11.132466 | orchestrator | 2026-02-16 06:26:11.132475 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-16 06:26:11.132483 | orchestrator | Monday 16 February 2026 06:26:05 +0000 (0:00:00.136) 0:03:36.749 ******* 2026-02-16 06:26:11.132492 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:26:11.132501 | orchestrator | 2026-02-16 06:26:11.132509 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-16 06:26:11.132517 | orchestrator | Monday 16 February 2026 06:26:05 +0000 (0:00:00.142) 0:03:36.891 ******* 2026-02-16 06:26:11.132526 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:26:11.132535 | orchestrator | 2026-02-16 06:26:11.132543 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-16 06:26:11.132552 | orchestrator | Monday 16 February 2026 06:26:06 +0000 (0:00:00.534) 0:03:37.426 ******* 2026-02-16 06:26:11.132560 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:26:11.132569 | orchestrator | 2026-02-16 06:26:11.132577 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-16 06:26:11.132586 | orchestrator | Monday 16 February 2026 06:26:06 +0000 (0:00:00.117) 0:03:37.543 ******* 2026-02-16 06:26:11.132594 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:26:11.132603 | orchestrator | 2026-02-16 06:26:11.132612 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-16 06:26:11.132620 | orchestrator | Monday 16 February 2026 06:26:06 +0000 
(0:00:00.122) 0:03:37.665 ******* 2026-02-16 06:26:11.132628 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:26:11.132637 | orchestrator | 2026-02-16 06:26:11.132645 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-16 06:26:11.132654 | orchestrator | Monday 16 February 2026 06:26:07 +0000 (0:00:00.581) 0:03:38.247 ******* 2026-02-16 06:26:11.132662 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:26:11.132671 | orchestrator | 2026-02-16 06:26:11.132699 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-16 06:26:11.132715 | orchestrator | Monday 16 February 2026 06:26:07 +0000 (0:00:00.569) 0:03:38.816 ******* 2026-02-16 06:26:11.132724 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:26:11.132733 | orchestrator | 2026-02-16 06:26:11.132741 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-16 06:26:11.132750 | orchestrator | Monday 16 February 2026 06:26:08 +0000 (0:00:00.146) 0:03:38.962 ******* 2026-02-16 06:26:11.132759 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:26:11.132767 | orchestrator | 2026-02-16 06:26:11.132776 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-16 06:26:11.132784 | orchestrator | Monday 16 February 2026 06:26:08 +0000 (0:00:00.150) 0:03:39.113 ******* 2026-02-16 06:26:11.132793 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:26:11.132802 | orchestrator | 2026-02-16 06:26:11.132810 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-16 06:26:11.132819 | orchestrator | Monday 16 February 2026 06:26:08 +0000 (0:00:00.133) 0:03:39.246 ******* 2026-02-16 06:26:11.132834 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:26:11.132849 | orchestrator | 2026-02-16 06:26:11.132862 | orchestrator | TASK [ceph-handler : Set_fact 
handler_rgw_status] ****************************** 2026-02-16 06:26:11.132876 | orchestrator | Monday 16 February 2026 06:26:08 +0000 (0:00:00.138) 0:03:39.385 ******* 2026-02-16 06:26:11.132889 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:26:11.132901 | orchestrator | 2026-02-16 06:26:11.132915 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-16 06:26:11.132929 | orchestrator | Monday 16 February 2026 06:26:08 +0000 (0:00:00.349) 0:03:39.735 ******* 2026-02-16 06:26:11.132942 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:26:11.132955 | orchestrator | 2026-02-16 06:26:11.132969 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-16 06:26:11.132991 | orchestrator | Monday 16 February 2026 06:26:08 +0000 (0:00:00.127) 0:03:39.863 ******* 2026-02-16 06:26:11.133006 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:26:11.133042 | orchestrator | 2026-02-16 06:26:11.133056 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-16 06:26:11.133070 | orchestrator | Monday 16 February 2026 06:26:09 +0000 (0:00:00.126) 0:03:39.989 ******* 2026-02-16 06:26:11.133085 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:26:11.133100 | orchestrator | 2026-02-16 06:26:11.133115 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-16 06:26:11.133130 | orchestrator | Monday 16 February 2026 06:26:09 +0000 (0:00:00.152) 0:03:40.142 ******* 2026-02-16 06:26:11.133145 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:26:11.133160 | orchestrator | 2026-02-16 06:26:11.133176 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-16 06:26:11.133191 | orchestrator | Monday 16 February 2026 06:26:09 +0000 (0:00:00.156) 0:03:40.298 ******* 2026-02-16 06:26:11.133205 | orchestrator | ok: 
[testbed-node-0]
2026-02-16 06:26:11.133219 | orchestrator |
2026-02-16 06:26:11.133231 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-16 06:26:11.133240 | orchestrator | Monday 16 February 2026 06:26:09 +0000 (0:00:00.205) 0:03:40.504 *******
2026-02-16 06:26:11.133249 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:11.133259 | orchestrator |
2026-02-16 06:26:11.133274 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-16 06:26:11.133288 | orchestrator | Monday 16 February 2026 06:26:09 +0000 (0:00:00.136) 0:03:40.641 *******
2026-02-16 06:26:11.133362 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:11.133383 | orchestrator |
2026-02-16 06:26:11.133400 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-16 06:26:11.133416 | orchestrator | Monday 16 February 2026 06:26:09 +0000 (0:00:00.127) 0:03:40.769 *******
2026-02-16 06:26:11.133426 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:11.133434 | orchestrator |
2026-02-16 06:26:11.133443 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-16 06:26:11.133461 | orchestrator | Monday 16 February 2026 06:26:09 +0000 (0:00:00.137) 0:03:40.906 *******
2026-02-16 06:26:11.133470 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:11.133478 | orchestrator |
2026-02-16 06:26:11.133487 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-16 06:26:11.133496 | orchestrator | Monday 16 February 2026 06:26:10 +0000 (0:00:00.120) 0:03:41.027 *******
2026-02-16 06:26:11.133504 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:11.133513 | orchestrator |
2026-02-16 06:26:11.133521 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-16 06:26:11.133530 | orchestrator | Monday 16 February 2026 06:26:10 +0000 (0:00:00.116) 0:03:41.143 *******
2026-02-16 06:26:11.133539 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:11.133547 | orchestrator |
2026-02-16 06:26:11.133556 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-16 06:26:11.133564 | orchestrator | Monday 16 February 2026 06:26:10 +0000 (0:00:00.141) 0:03:41.285 *******
2026-02-16 06:26:11.133573 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:11.133582 | orchestrator |
2026-02-16 06:26:11.133591 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-16 06:26:11.133599 | orchestrator | Monday 16 February 2026 06:26:10 +0000 (0:00:00.363) 0:03:41.648 *******
2026-02-16 06:26:11.133608 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:11.133617 | orchestrator |
2026-02-16 06:26:11.133625 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-16 06:26:11.133634 | orchestrator | Monday 16 February 2026 06:26:10 +0000 (0:00:00.135) 0:03:41.784 *******
2026-02-16 06:26:11.133642 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:11.133651 | orchestrator |
2026-02-16 06:26:11.133659 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-16 06:26:11.133668 | orchestrator | Monday 16 February 2026 06:26:10 +0000 (0:00:00.129) 0:03:41.914 *******
2026-02-16 06:26:11.133676 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:11.133685 | orchestrator |
2026-02-16 06:26:11.133694 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-16 06:26:11.133702 | orchestrator | Monday 16 February 2026 06:26:11 +0000 (0:00:00.145) 0:03:42.059 *******
2026-02-16 06:26:30.273310 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.273420 | orchestrator |
2026-02-16 06:26:30.273435 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-16 06:26:30.273447 | orchestrator | Monday 16 February 2026 06:26:11 +0000 (0:00:00.122) 0:03:42.182 *******
2026-02-16 06:26:30.273457 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.273467 | orchestrator |
2026-02-16 06:26:30.273477 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-16 06:26:30.273487 | orchestrator | Monday 16 February 2026 06:26:11 +0000 (0:00:00.223) 0:03:42.406 *******
2026-02-16 06:26:30.273497 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:26:30.273507 | orchestrator |
2026-02-16 06:26:30.273517 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-16 06:26:30.273527 | orchestrator | Monday 16 February 2026 06:26:12 +0000 (0:00:01.021) 0:03:43.428 *******
2026-02-16 06:26:30.273536 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:26:30.273546 | orchestrator |
2026-02-16 06:26:30.273555 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-16 06:26:30.273565 | orchestrator | Monday 16 February 2026 06:26:14 +0000 (0:00:01.578) 0:03:45.007 *******
2026-02-16 06:26:30.273574 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-02-16 06:26:30.273584 | orchestrator |
2026-02-16 06:26:30.273594 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-16 06:26:30.273603 | orchestrator | Monday 16 February 2026 06:26:14 +0000 (0:00:00.557) 0:03:45.564 *******
2026-02-16 06:26:30.273613 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.273669 | orchestrator |
2026-02-16 06:26:30.273681 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-16 06:26:30.273705 | orchestrator | Monday 16 February 2026 06:26:14 +0000 (0:00:00.126) 0:03:45.690 *******
2026-02-16 06:26:30.273715 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.273725 | orchestrator |
2026-02-16 06:26:30.273734 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-16 06:26:30.273743 | orchestrator | Monday 16 February 2026 06:26:14 +0000 (0:00:00.131) 0:03:45.822 *******
2026-02-16 06:26:30.273753 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-16 06:26:30.273762 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-16 06:26:30.273772 | orchestrator |
2026-02-16 06:26:30.273782 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-16 06:26:30.273791 | orchestrator | Monday 16 February 2026 06:26:15 +0000 (0:00:01.080) 0:03:46.902 *******
2026-02-16 06:26:30.273800 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:26:30.273811 | orchestrator |
2026-02-16 06:26:30.273820 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-16 06:26:30.273830 | orchestrator | Monday 16 February 2026 06:26:16 +0000 (0:00:00.655) 0:03:47.558 *******
2026-02-16 06:26:30.273839 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.273851 | orchestrator |
2026-02-16 06:26:30.273861 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-16 06:26:30.273872 | orchestrator | Monday 16 February 2026 06:26:16 +0000 (0:00:00.146) 0:03:47.704 *******
2026-02-16 06:26:30.273883 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.273893 | orchestrator |
2026-02-16 06:26:30.273905 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-16 06:26:30.273915 | orchestrator | Monday 16 February 2026 06:26:16 +0000 (0:00:00.141) 0:03:47.845 *******
2026-02-16 06:26:30.273926 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.273937 | orchestrator |
2026-02-16 06:26:30.273947 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-16 06:26:30.273958 | orchestrator | Monday 16 February 2026 06:26:17 +0000 (0:00:00.139) 0:03:47.984 *******
2026-02-16 06:26:30.273969 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-02-16 06:26:30.273979 | orchestrator |
2026-02-16 06:26:30.273990 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-16 06:26:30.274001 | orchestrator | Monday 16 February 2026 06:26:17 +0000 (0:00:00.567) 0:03:48.553 *******
2026-02-16 06:26:30.274094 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:26:30.274106 | orchestrator |
2026-02-16 06:26:30.274117 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-16 06:26:30.274128 | orchestrator | Monday 16 February 2026 06:26:18 +0000 (0:00:00.794) 0:03:49.347 *******
2026-02-16 06:26:30.274139 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-16 06:26:30.274150 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-16 06:26:30.274160 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-16 06:26:30.274171 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.274182 | orchestrator |
2026-02-16 06:26:30.274193 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-16 06:26:30.274204 | orchestrator | Monday 16 February 2026 06:26:18 +0000 (0:00:00.160) 0:03:49.507 *******
2026-02-16 06:26:30.274214 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.274223 | orchestrator |
2026-02-16 06:26:30.274233 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-16 06:26:30.274242 | orchestrator | Monday 16 February 2026 06:26:18 +0000 (0:00:00.120) 0:03:49.627 *******
2026-02-16 06:26:30.274252 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.274270 | orchestrator |
2026-02-16 06:26:30.274280 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-16 06:26:30.274290 | orchestrator | Monday 16 February 2026 06:26:18 +0000 (0:00:00.164) 0:03:49.792 *******
2026-02-16 06:26:30.274299 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.274309 | orchestrator |
2026-02-16 06:26:30.274318 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-16 06:26:30.274344 | orchestrator | Monday 16 February 2026 06:26:19 +0000 (0:00:00.147) 0:03:49.940 *******
2026-02-16 06:26:30.274354 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.274363 | orchestrator |
2026-02-16 06:26:30.274373 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-16 06:26:30.274382 | orchestrator | Monday 16 February 2026 06:26:19 +0000 (0:00:00.145) 0:03:50.085 *******
2026-02-16 06:26:30.274392 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.274401 | orchestrator |
2026-02-16 06:26:30.274411 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-16 06:26:30.274420 | orchestrator | Monday 16 February 2026 06:26:19 +0000 (0:00:00.369) 0:03:50.455 *******
2026-02-16 06:26:30.274429 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:26:30.274439 | orchestrator |
2026-02-16 06:26:30.274448 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-16 06:26:30.274458 | orchestrator | Monday 16 February 2026 06:26:21 +0000 (0:00:01.649) 0:03:52.104 *******
2026-02-16 06:26:30.274467 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:26:30.274477 | orchestrator |
2026-02-16 06:26:30.274486 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-16 06:26:30.274496 | orchestrator | Monday 16 February 2026 06:26:21 +0000 (0:00:00.141) 0:03:52.246 *******
2026-02-16 06:26:30.274505 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-02-16 06:26:30.274515 | orchestrator |
2026-02-16 06:26:30.274524 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-16 06:26:30.274534 | orchestrator | Monday 16 February 2026 06:26:21 +0000 (0:00:00.605) 0:03:52.851 *******
2026-02-16 06:26:30.274543 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.274552 | orchestrator |
2026-02-16 06:26:30.274562 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-16 06:26:30.274577 | orchestrator | Monday 16 February 2026 06:26:22 +0000 (0:00:00.151) 0:03:53.003 *******
2026-02-16 06:26:30.274587 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.274596 | orchestrator |
2026-02-16 06:26:30.274606 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-16 06:26:30.274615 | orchestrator | Monday 16 February 2026 06:26:22 +0000 (0:00:00.154) 0:03:53.157 *******
2026-02-16 06:26:30.274625 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.274634 | orchestrator |
2026-02-16 06:26:30.274643 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-16 06:26:30.274653 | orchestrator | Monday 16 February 2026 06:26:22 +0000 (0:00:00.147) 0:03:53.305 *******
2026-02-16 06:26:30.274662 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.274672 | orchestrator |
2026-02-16 06:26:30.274681 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-16 06:26:30.274691 | orchestrator | Monday 16 February 2026 06:26:22 +0000 (0:00:00.156) 0:03:53.462 *******
2026-02-16 06:26:30.274700 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.274709 | orchestrator |
2026-02-16 06:26:30.274719 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-16 06:26:30.274728 | orchestrator | Monday 16 February 2026 06:26:22 +0000 (0:00:00.145) 0:03:53.608 *******
2026-02-16 06:26:30.274738 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.274747 | orchestrator |
2026-02-16 06:26:30.274757 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-16 06:26:30.274766 | orchestrator | Monday 16 February 2026 06:26:22 +0000 (0:00:00.153) 0:03:53.762 *******
2026-02-16 06:26:30.274782 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.274792 | orchestrator |
2026-02-16 06:26:30.274809 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-16 06:26:30.274827 | orchestrator | Monday 16 February 2026 06:26:22 +0000 (0:00:00.148) 0:03:53.910 *******
2026-02-16 06:26:30.274845 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:30.274862 | orchestrator |
2026-02-16 06:26:30.274879 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-16 06:26:30.274897 | orchestrator | Monday 16 February 2026 06:26:23 +0000 (0:00:00.151) 0:03:54.062 *******
2026-02-16 06:26:30.274913 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:26:30.274928 | orchestrator |
2026-02-16 06:26:30.274944 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-16 06:26:30.274961 | orchestrator | Monday 16 February 2026 06:26:23 +0000 (0:00:00.468) 0:03:54.531 *******
2026-02-16 06:26:30.274978 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-02-16 06:26:30.274994 | orchestrator |
2026-02-16 06:26:30.275046 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-16 06:26:30.275065 | orchestrator | Monday 16 February 2026 06:26:24 +0000 (0:00:00.548) 0:03:55.080 *******
2026-02-16 06:26:30.275081 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-02-16 06:26:30.275099 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-16 06:26:30.275116 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-16 06:26:30.275134 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-16 06:26:30.275151 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-16 06:26:30.275168 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-16 06:26:30.275186 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-16 06:26:30.275204 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-16 06:26:30.275221 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-16 06:26:30.275238 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-16 06:26:30.275255 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-16 06:26:30.275273 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-16 06:26:30.275283 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-16 06:26:30.275293 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-16 06:26:30.275313 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-02-16 06:26:43.501938 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-02-16 06:26:43.502187 | orchestrator |
2026-02-16 06:26:43.502221 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-16 06:26:43.502244 | orchestrator | Monday 16 February 2026 06:26:30 +0000 (0:00:06.110) 0:04:01.190 *******
2026-02-16 06:26:43.502263 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.502279 | orchestrator |
2026-02-16 06:26:43.502290 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-16 06:26:43.502302 | orchestrator | Monday 16 February 2026 06:26:30 +0000 (0:00:00.138) 0:04:01.328 *******
2026-02-16 06:26:43.502312 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.502323 | orchestrator |
2026-02-16 06:26:43.502334 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-16 06:26:43.502345 | orchestrator | Monday 16 February 2026 06:26:30 +0000 (0:00:00.138) 0:04:01.466 *******
2026-02-16 06:26:43.502355 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.502366 | orchestrator |
2026-02-16 06:26:43.502377 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-16 06:26:43.502387 | orchestrator | Monday 16 February 2026 06:26:30 +0000 (0:00:00.121) 0:04:01.588 *******
2026-02-16 06:26:43.502398 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.502409 | orchestrator |
2026-02-16 06:26:43.502447 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-16 06:26:43.502459 | orchestrator | Monday 16 February 2026 06:26:30 +0000 (0:00:00.133) 0:04:01.721 *******
2026-02-16 06:26:43.502470 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.502483 | orchestrator |
2026-02-16 06:26:43.502495 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-16 06:26:43.502524 | orchestrator | Monday 16 February 2026 06:26:30 +0000 (0:00:00.135) 0:04:01.857 *******
2026-02-16 06:26:43.502536 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.502548 | orchestrator |
2026-02-16 06:26:43.502561 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-16 06:26:43.502574 | orchestrator | Monday 16 February 2026 06:26:31 +0000 (0:00:00.150) 0:04:02.007 *******
2026-02-16 06:26:43.502587 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.502599 | orchestrator |
2026-02-16 06:26:43.502611 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-16 06:26:43.502624 | orchestrator | Monday 16 February 2026 06:26:31 +0000 (0:00:00.123) 0:04:02.131 *******
2026-02-16 06:26:43.502635 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.502648 | orchestrator |
2026-02-16 06:26:43.502661 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-16 06:26:43.502673 | orchestrator | Monday 16 February 2026 06:26:31 +0000 (0:00:00.150) 0:04:02.281 *******
2026-02-16 06:26:43.502685 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.502697 | orchestrator |
2026-02-16 06:26:43.502710 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-16 06:26:43.502722 | orchestrator | Monday 16 February 2026 06:26:31 +0000 (0:00:00.132) 0:04:02.414 *******
2026-02-16 06:26:43.502734 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.502746 | orchestrator |
2026-02-16 06:26:43.502758 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-16 06:26:43.502770 | orchestrator | Monday 16 February 2026 06:26:31 +0000 (0:00:00.364) 0:04:02.778 *******
2026-02-16 06:26:43.502782 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.502794 | orchestrator |
2026-02-16 06:26:43.502807 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-16 06:26:43.502819 | orchestrator | Monday 16 February 2026 06:26:31 +0000 (0:00:00.136) 0:04:02.914 *******
2026-02-16 06:26:43.502832 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.502842 | orchestrator |
2026-02-16 06:26:43.502853 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-16 06:26:43.502865 | orchestrator | Monday 16 February 2026 06:26:32 +0000 (0:00:00.133) 0:04:03.048 *******
2026-02-16 06:26:43.502875 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.502886 | orchestrator |
2026-02-16 06:26:43.502897 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-16 06:26:43.502908 | orchestrator | Monday 16 February 2026 06:26:32 +0000 (0:00:00.251) 0:04:03.299 *******
2026-02-16 06:26:43.502918 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.502929 | orchestrator |
2026-02-16 06:26:43.502939 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-16 06:26:43.502950 | orchestrator | Monday 16 February 2026 06:26:32 +0000 (0:00:00.134) 0:04:03.433 *******
2026-02-16 06:26:43.502961 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.502971 | orchestrator |
2026-02-16 06:26:43.502982 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-16 06:26:43.502993 | orchestrator | Monday 16 February 2026 06:26:32 +0000 (0:00:00.245) 0:04:03.679 *******
2026-02-16 06:26:43.503069 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.503083 | orchestrator |
2026-02-16 06:26:43.503094 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-16 06:26:43.503104 | orchestrator | Monday 16 February 2026 06:26:32 +0000 (0:00:00.140) 0:04:03.820 *******
2026-02-16 06:26:43.503129 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.503140 | orchestrator |
2026-02-16 06:26:43.503151 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-16 06:26:43.503163 | orchestrator | Monday 16 February 2026 06:26:33 +0000 (0:00:00.138) 0:04:03.959 *******
2026-02-16 06:26:43.503174 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.503184 | orchestrator |
2026-02-16 06:26:43.503195 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-16 06:26:43.503205 | orchestrator | Monday 16 February 2026 06:26:33 +0000 (0:00:00.144) 0:04:04.103 *******
2026-02-16 06:26:43.503216 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.503227 | orchestrator |
2026-02-16 06:26:43.503257 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-16 06:26:43.503269 | orchestrator | Monday 16 February 2026 06:26:33 +0000 (0:00:00.139) 0:04:04.243 *******
2026-02-16 06:26:43.503280 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.503290 | orchestrator |
2026-02-16 06:26:43.503301 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-16 06:26:43.503311 | orchestrator | Monday 16 February 2026 06:26:33 +0000 (0:00:00.132) 0:04:04.375 *******
2026-02-16 06:26:43.503322 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.503332 | orchestrator |
2026-02-16 06:26:43.503343 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-16 06:26:43.503354 | orchestrator | Monday 16 February 2026 06:26:33 +0000 (0:00:00.138) 0:04:04.514 *******
2026-02-16 06:26:43.503364 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-16 06:26:43.503375 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-16 06:26:43.503386 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-16 06:26:43.503396 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.503407 | orchestrator |
2026-02-16 06:26:43.503417 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-16 06:26:43.503428 | orchestrator | Monday 16 February 2026 06:26:34 +0000 (0:00:00.688) 0:04:05.202 *******
2026-02-16 06:26:43.503439 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-16 06:26:43.503449 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-16 06:26:43.503460 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-16 06:26:43.503470 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.503481 | orchestrator |
2026-02-16 06:26:43.503492 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-16 06:26:43.503509 | orchestrator | Monday 16 February 2026 06:26:35 +0000 (0:00:00.969) 0:04:06.172 *******
2026-02-16 06:26:43.503520 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-16 06:26:43.503531 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-16 06:26:43.503541 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-16 06:26:43.503552 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.503563 | orchestrator |
2026-02-16 06:26:43.503573 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-16 06:26:43.503584 | orchestrator | Monday 16 February 2026 06:26:35 +0000 (0:00:00.446) 0:04:06.618 *******
2026-02-16 06:26:43.503594 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.503605 | orchestrator |
2026-02-16 06:26:43.503615 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-16 06:26:43.503626 | orchestrator | Monday 16 February 2026 06:26:35 +0000 (0:00:00.154) 0:04:06.773 *******
2026-02-16 06:26:43.503637 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-16 06:26:43.503647 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.503658 | orchestrator |
2026-02-16 06:26:43.503668 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-16 06:26:43.503679 | orchestrator | Monday 16 February 2026 06:26:36 +0000 (0:00:00.656) 0:04:07.429 *******
2026-02-16 06:26:43.503717 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:26:43.503728 | orchestrator |
2026-02-16 06:26:43.503739 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-16 06:26:43.503750 | orchestrator | Monday 16 February 2026 06:26:37 +0000 (0:00:01.002) 0:04:08.432 *******
2026-02-16 06:26:43.503760 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:26:43.503771 | orchestrator |
2026-02-16 06:26:43.503782 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-16 06:26:43.503792 | orchestrator | Monday 16 February 2026 06:26:37 +0000 (0:00:00.159) 0:04:08.591 *******
2026-02-16 06:26:43.503803 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0
2026-02-16 06:26:43.503814 | orchestrator |
2026-02-16 06:26:43.503825 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-16 06:26:43.503836 | orchestrator | Monday 16 February 2026 06:26:38 +0000 (0:00:00.615) 0:04:09.207 *******
2026-02-16 06:26:43.503846 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-02-16 06:26:43.503857 | orchestrator |
2026-02-16 06:26:43.503868 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-16 06:26:43.503878 | orchestrator | Monday 16 February 2026 06:26:40 +0000 (0:00:02.198) 0:04:11.406 *******
2026-02-16 06:26:43.503889 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:26:43.503900 | orchestrator |
2026-02-16 06:26:43.503911 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-16 06:26:43.503922 | orchestrator | Monday 16 February 2026 06:26:40 +0000 (0:00:00.152) 0:04:11.558 *******
2026-02-16 06:26:43.503932 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:26:43.503943 | orchestrator |
2026-02-16 06:26:43.503954 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-16 06:26:43.503964 | orchestrator | Monday 16 February 2026 06:26:40 +0000 (0:00:00.151) 0:04:11.710 *******
2026-02-16 06:26:43.503975 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:26:43.503986 | orchestrator |
2026-02-16 06:26:43.503996 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-16 06:26:43.504055 | orchestrator | Monday 16 February 2026 06:26:41 +0000 (0:00:00.430) 0:04:12.140 *******
2026-02-16 06:26:43.504066 | orchestrator | changed: [testbed-node-0]
2026-02-16 06:26:43.504077 | orchestrator |
2026-02-16 06:26:43.504088 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-16 06:26:43.504098 | orchestrator | Monday 16 February 2026 06:26:42 +0000 (0:00:01.142) 0:04:13.282 *******
2026-02-16 06:26:43.504109 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:26:43.504120 | orchestrator |
2026-02-16 06:26:43.504130 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-16 06:26:43.504141 | orchestrator | Monday 16 February 2026 06:26:42 +0000 (0:00:00.634) 0:04:13.916 *******
2026-02-16 06:26:43.504151 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:26:43.504162 | orchestrator |
2026-02-16 06:26:43.504180 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-16 06:27:36.684966 | orchestrator | Monday 16 February 2026 06:26:43 +0000 (0:00:00.512) 0:04:14.429 *******
2026-02-16 06:27:36.685154 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:27:36.685175 | orchestrator |
2026-02-16 06:27:36.685189 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-16 06:27:36.685201 | orchestrator | Monday 16 February 2026 06:26:43 +0000 (0:00:00.496) 0:04:14.925 *******
2026-02-16 06:27:36.685213 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:27:36.685224 | orchestrator |
2026-02-16 06:27:36.685236 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-16 06:27:36.685248 | orchestrator | Monday 16 February 2026 06:26:44 +0000 (0:00:00.758) 0:04:15.683 *******
2026-02-16 06:27:36.685259 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:27:36.685270 | orchestrator |
2026-02-16 06:27:36.685281 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-16 06:27:36.685317 | orchestrator | Monday 16 February 2026 06:26:45 +0000 (0:00:00.695) 0:04:16.379 *******
2026-02-16 06:27:36.685329 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-16 06:27:36.685341 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-16 06:27:36.685352 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-16 06:27:36.685362 | orchestrator | ok: [testbed-node-0 -> {{ item }}]
2026-02-16 06:27:36.685373 | orchestrator |
2026-02-16 06:27:36.685384 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-16 06:27:36.685394 | orchestrator | Monday 16 February 2026 06:26:48 +0000 (0:00:02.927) 0:04:19.307 *******
2026-02-16 06:27:36.685405 | orchestrator | changed: [testbed-node-0]
2026-02-16 06:27:36.685416 | orchestrator |
2026-02-16 06:27:36.685426 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-16 06:27:36.685451 | orchestrator | Monday 16 February 2026 06:26:49 +0000 (0:00:01.035) 0:04:20.343 *******
2026-02-16 06:27:36.685463 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:27:36.685474 | orchestrator |
2026-02-16 06:27:36.685485 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-16 06:27:36.685497 | orchestrator | Monday 16 February 2026 06:26:49 +0000 (0:00:00.141) 0:04:20.484 *******
2026-02-16 06:27:36.685510 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:27:36.685523 | orchestrator |
2026-02-16 06:27:36.685536 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-16 06:27:36.685548 | orchestrator | Monday 16 February 2026 06:26:49 +0000 (0:00:00.138) 0:04:20.622 *******
2026-02-16 06:27:36.685560 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:27:36.685573 | orchestrator |
2026-02-16 06:27:36.685584 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-16 06:27:36.685595 | orchestrator | Monday 16 February 2026 06:26:50 +0000 (0:00:01.021) 0:04:21.644 *******
2026-02-16 06:27:36.685606 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:27:36.685616 | orchestrator |
2026-02-16 06:27:36.685627 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-16 06:27:36.685638 | orchestrator | Monday 16 February 2026 06:26:51 +0000 (0:00:00.480) 0:04:22.124 *******
2026-02-16 06:27:36.685648 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:27:36.685659 | orchestrator |
2026-02-16 06:27:36.685670 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-02-16 06:27:36.685681 | orchestrator | Monday 16 February 2026 06:26:51 +0000 (0:00:00.554) 0:04:22.535 *******
2026-02-16 06:27:36.685691 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0
2026-02-16 06:27:36.685703 | orchestrator |
2026-02-16 06:27:36.685713 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-02-16 06:27:36.685724 | orchestrator | Monday 16 February 2026 06:26:52 +0000 (0:00:00.554) 0:04:23.090 *******
2026-02-16 06:27:36.685735 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:27:36.685745 | orchestrator |
2026-02-16 06:27:36.685756 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-02-16 06:27:36.685767 | orchestrator | Monday 16 February 2026 06:26:52 +0000 (0:00:00.148) 0:04:23.238 *******
2026-02-16 06:27:36.685777 | orchestrator | skipping: [testbed-node-0]
2026-02-16 06:27:36.685788 | orchestrator |
2026-02-16 06:27:36.685799 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-02-16 06:27:36.685810 | orchestrator | Monday 16 February 2026 06:26:52 +0000 (0:00:00.124) 0:04:23.363 *******
2026-02-16 06:27:36.685820 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0
2026-02-16 06:27:36.685831 | orchestrator |
2026-02-16 06:27:36.685842 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-02-16 06:27:36.685852 | orchestrator | Monday 16 February 2026 06:26:53 +0000 (0:00:00.608) 0:04:23.971 *******
2026-02-16 06:27:36.685863 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:27:36.685874 | orchestrator |
2026-02-16 06:27:36.685885 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-16 06:27:36.685905 | orchestrator | Monday 16 February 2026 06:26:54 +0000 (0:00:01.336) 0:04:25.307 *******
2026-02-16 06:27:36.685916 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:27:36.685927 | orchestrator |
2026-02-16 06:27:36.685937 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-02-16 06:27:36.685948 | orchestrator | Monday 16 February 2026 06:26:55 +0000 (0:00:00.995) 0:04:26.303 *******
2026-02-16 06:27:36.685959 | orchestrator | ok: [testbed-node-0]
2026-02-16 06:27:36.685970 | orchestrator |
2026-02-16 06:27:36.686079 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-02-16 06:27:36.686098 | orchestrator | Monday 16 February 2026 06:26:56 +0000 (0:00:01.421) 0:04:27.725 *******
2026-02-16 06:27:36.686109 | orchestrator | changed: [testbed-node-0]
2026-02-16 06:27:36.686120 | orchestrator |
2026-02-16 06:27:36.686131 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-02-16 06:27:36.686142 | orchestrator | Monday 16 February 2026 06:26:59 +0000 (0:00:02.274) 0:04:29.999 *******
2026-02-16 06:27:36.686153 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0
2026-02-16 06:27:36.686171 | orchestrator |
2026-02-16 06:27:36.686213 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-02-16 06:27:36.686232 | orchestrator | Monday 16 February 2026 06:26:59 +0000 (0:00:00.574) 0:04:30.574 *******
2026-02-16 06:27:36.686250 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-02-16 06:27:36.686267 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:27:36.686283 | orchestrator | 2026-02-16 06:27:36.686300 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-16 06:27:36.686317 | orchestrator | Monday 16 February 2026 06:27:21 +0000 (0:00:22.346) 0:04:52.920 ******* 2026-02-16 06:27:36.686335 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:27:36.686352 | orchestrator | 2026-02-16 06:27:36.686370 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-16 06:27:36.686389 | orchestrator | Monday 16 February 2026 06:27:24 +0000 (0:00:02.032) 0:04:54.952 ******* 2026-02-16 06:27:36.686407 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:27:36.686425 | orchestrator | 2026-02-16 06:27:36.686447 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-16 06:27:36.686466 | orchestrator | Monday 16 February 2026 06:27:24 +0000 (0:00:00.120) 0:04:55.073 ******* 2026-02-16 06:27:36.686488 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ad09daec448394448226c52ec1e81d37692336f3'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-16 06:27:36.686518 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ad09daec448394448226c52ec1e81d37692336f3'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-16 06:27:36.686537 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ad09daec448394448226c52ec1e81d37692336f3'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-16 06:27:36.686556 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ad09daec448394448226c52ec1e81d37692336f3'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-16 06:27:36.686590 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ad09daec448394448226c52ec1e81d37692336f3'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-16 06:27:36.686611 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ad09daec448394448226c52ec1e81d37692336f3'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__ad09daec448394448226c52ec1e81d37692336f3'}])  2026-02-16 06:27:36.686631 | orchestrator | 2026-02-16 06:27:36.686650 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-16 06:27:36.686668 | orchestrator | Monday 16 February 2026 06:27:33 +0000 (0:00:09.399) 0:05:04.473 ******* 2026-02-16 06:27:36.686688 | orchestrator | changed: [testbed-node-0] 2026-02-16 06:27:36.686707 | orchestrator | 
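Editor's note on the "Set cluster configs" loop above: ceph-ansible pairs each config section with its keys and skips any item whose value is an Ansible omit placeholder (the `__omit_place_holder__…` sentinel visible in the skipped item). A minimal sketch of that filtering, assuming a hypothetical `flatten_configs` helper rather than ceph-ansible's actual implementation:

```python
# Sketch: turn the {'global': {...}} mapping seen in the log into
# (section, key, value) tuples, skipping Ansible omit placeholders.
OMIT_PREFIX = "__omit_place_holder__"

def flatten_configs(configs):
    """Yield (section, key, value) for every real setting."""
    for section, options in configs.items():
        for key, value in options.items():
            # Ansible renders omitted values as a sentinel string;
            # the loop above skips those items, as the log shows.
            if isinstance(value, str) and value.startswith(OMIT_PREFIX):
                continue
            yield section, key, value

# Sample data copied from the log output above.
configs = {
    "global": {
        "public_network": "192.168.16.0/20",
        "cluster_network": "192.168.16.0/20",
        "osd_pool_default_crush_rule": -1,
        "ms_bind_ipv6": "False",
        "ms_bind_ipv4": "True",
        "osd_crush_chooseleaf_type": "__omit_place_holder__ad09daec448394448226c52ec1e81d37692336f3",
    }
}

applied = list(flatten_configs(configs))
# osd_crush_chooseleaf_type is skipped, leaving five settings to apply.
```

This mirrors why the log shows five `ok` items and one `skipping` item for the task.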
2026-02-16 06:27:36.686726 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-16 06:27:36.686745 | orchestrator | Monday 16 February 2026 06:27:35 +0000 (0:00:01.509) 0:05:05.982 ******* 2026-02-16 06:27:36.686758 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-16 06:27:36.686769 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-16 06:27:36.686779 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-16 06:27:36.686790 | orchestrator | 2026-02-16 06:27:36.686801 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-16 06:27:36.686811 | orchestrator | Monday 16 February 2026 06:27:36 +0000 (0:00:01.172) 0:05:07.155 ******* 2026-02-16 06:27:36.686822 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-16 06:27:36.686833 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-16 06:27:36.686844 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-16 06:27:36.686855 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:27:36.686865 | orchestrator | 2026-02-16 06:27:36.686888 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-16 06:58:56.014214 | orchestrator | Monday 16 February 2026 06:27:36 +0000 (0:00:00.454) 0:05:07.609 ******* 2026-02-16 06:58:56.014405 | orchestrator | skipping: [testbed-node-0] 2026-02-16 06:58:56.014421 | orchestrator | 2026-02-16 06:58:56.014430 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] *** 2026-02-16 06:58:56.014439 | orchestrator | Monday 16 February 2026 06:27:36 +0000 (0:00:00.128) 0:05:07.738 ******* 2026-02-16 06:58:56.014447 | orchestrator | 2026-02-16 06:58:56.014455 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-16 06:58:56.014537 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (5 retries left). 2026-02-16 06:58:56.014741 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (4 retries left). 2026-02-16 06:58:56.014928 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (3 retries left). 2026-02-16 06:58:56.015152 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (2 retries left). 2026-02-16 06:58:56.015464 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (1 retries left). 2026-02-16 06:58:56.015658 | orchestrator | [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin 2026-02-16 06:58:56.015667 | orchestrator | (): '31ef3edc-a46e-bba2-1742-000000000297' 2026-02-16 06:58:56.015714 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"attempts": 5, "changed": false, "cmd": ["docker", "exec", "ceph-mon-testbed-node-0", "ceph", "--cluster", "ceph", "-m", "192.168.16.8", "quorum_status", "--format", "json"], "delta": "0:05:00.274576", "end": "2026-02-16 06:58:55.749698", "msg": "non-zero return code", "rc": 1, "start": "2026-02-16 06:53:55.475122", "stderr": "2026-02-16T06:58:55.723+0000 72743af5d640 0 monclient(hunting): authenticate timed out after 300\n[errno 110] RADOS timed out (error connecting to the cluster)", "stderr_lines": ["2026-02-16T06:58:55.723+0000 72743af5d640 0 monclient(hunting): authenticate timed out after 300", "[errno 110] RADOS timed out (error connecting to the cluster)"], "stdout": "", "stdout_lines": []} 2026-02-16 06:58:59.770464 | orchestrator | 2026-02-16 06:58:59.770566 | orchestrator | TASK [Unmask the mon service] ************************************************** 2026-02-16 06:58:59.770583 | orchestrator | Monday 16 February 2026 06:58:55 +0000 (0:31:19.197) 0:36:26.935 ******* 2026-02-16 06:58:59.770595 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:58:59.770608 | orchestrator | 2026-02-16 06:58:59.770619 | orchestrator | TASK [Unmask the mgr service] ************************************************** 2026-02-16 06:58:59.770631 | orchestrator | Monday 16 February 2026 06:58:56
+0000 (0:00:00.834) 0:36:27.770 ******* 2026-02-16 06:58:59.770642 | orchestrator | ok: [testbed-node-0] 2026-02-16 06:58:59.770653 | orchestrator | 2026-02-16 06:58:59.770664 | orchestrator | TASK [Stop the playbook execution] ********************************************* 2026-02-16 06:58:59.770675 | orchestrator | Monday 16 February 2026 06:58:57 +0000 (0:00:01.077) 0:36:28.848 ******* 2026-02-16 06:58:59.770686 | orchestrator | [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin 2026-02-16 06:58:59.770697 | orchestrator | (): '31ef3edc-a46e-bba2-1742-0000000002a2' 2026-02-16 06:58:59.770720 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "There was an error during monitor upgrade. Please, check the previous task results."} 2026-02-16 06:58:59.770732 | orchestrator | 2026-02-16 06:58:59.770743 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-16 06:58:59.770770 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-16 06:58:59.770782 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0 2026-02-16 06:58:59.770817 | orchestrator | testbed-node-0 : ok=121  changed=7  unreachable=0 failed=1  skipped=164  rescued=1  ignored=0 2026-02-16 06:58:59.770830 | orchestrator | testbed-node-1 : ok=25  changed=1  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0 2026-02-16 06:58:59.770841 | orchestrator | testbed-node-2 : ok=25  changed=1  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0 2026-02-16 06:58:59.770852 | orchestrator | testbed-node-3 : ok=33  changed=1  unreachable=0 failed=0 skipped=74  rescued=0 ignored=0 2026-02-16 06:58:59.770910 | orchestrator | testbed-node-4 : ok=33  changed=1  unreachable=0 failed=0 skipped=71  rescued=0 ignored=0 2026-02-16 06:58:59.770925 | orchestrator | testbed-node-5 : ok=33  changed=1  unreachable=0 failed=0 
skipped=71  rescued=0 ignored=0 2026-02-16 06:58:59.770936 | orchestrator | 2026-02-16 06:58:59.770947 | orchestrator | 2026-02-16 06:58:59.770957 | orchestrator | 2026-02-16 06:58:59.770968 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-16 06:58:59.770979 | orchestrator | Monday 16 February 2026 06:58:59 +0000 (0:00:01.261) 0:36:30.109 ******* 2026-02-16 06:58:59.770990 | orchestrator | =============================================================================== 2026-02-16 06:58:59.771001 | orchestrator | Container | waiting for the containerized monitor to join the quorum... 1879.20s 2026-02-16 06:58:59.771013 | orchestrator | Gather and delegate facts ---------------------------------------------- 31.12s 2026-02-16 06:58:59.771025 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.35s 2026-02-16 06:58:59.771037 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 12.65s 2026-02-16 06:58:59.771049 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 10.70s 2026-02-16 06:58:59.771061 | orchestrator | Set cluster configs ---------------------------------------------------- 10.33s 2026-02-16 06:58:59.771073 | orchestrator | ceph-mon : Set cluster configs ------------------------------------------ 9.40s 2026-02-16 06:58:59.771085 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.11s 2026-02-16 06:58:59.771097 | orchestrator | Gather facts ------------------------------------------------------------ 3.67s 2026-02-16 06:58:59.771110 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 3.41s 2026-02-16 06:58:59.771122 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 2.93s 2026-02-16 06:58:59.771134 | orchestrator | Stop ceph mon 
----------------------------------------------------------- 2.90s 2026-02-16 06:58:59.771146 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.36s 2026-02-16 06:58:59.771158 | orchestrator | ceph-mon : Start the monitor service ------------------------------------ 2.27s 2026-02-16 06:58:59.771170 | orchestrator | ceph-infra : Add logrotate configuration -------------------------------- 2.22s 2026-02-16 06:58:59.771183 | orchestrator | ceph-mon : Check if monitor initial keyring already exists -------------- 2.20s 2026-02-16 06:58:59.771194 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.19s 2026-02-16 06:58:59.771205 | orchestrator | ceph-validate : Include check_system.yml -------------------------------- 2.12s 2026-02-16 06:58:59.771216 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.07s 2026-02-16 06:58:59.771243 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 2.03s 2026-02-16 06:59:00.435321 | orchestrator | ERROR 2026-02-16 06:59:00.435777 | orchestrator | { 2026-02-16 06:59:00.435885 | orchestrator | "delta": "2:00:27.420733", 2026-02-16 06:59:00.435955 | orchestrator | "end": "2026-02-16 06:59:00.062960", 2026-02-16 06:59:00.436057 | orchestrator | "msg": "non-zero return code", 2026-02-16 06:59:00.436111 | orchestrator | "rc": 2, 2026-02-16 06:59:00.436189 | orchestrator | "start": "2026-02-16 04:58:32.642227" 2026-02-16 06:59:00.436273 | orchestrator | } failure 2026-02-16 06:59:00.662384 | 2026-02-16 06:59:00.662509 | PLAY RECAP 2026-02-16 06:59:00.662569 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0 2026-02-16 06:59:00.662593 | 2026-02-16 06:59:00.879796 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main] 2026-02-16 06:59:00.881618 | POST-RUN START: [untrusted : 
github.com/osism/testbed/playbooks/post.yml@main]
2026-02-16 06:59:01.620935 |
2026-02-16 06:59:01.621113 | PLAY [Post output play]
2026-02-16 06:59:01.647224 |
2026-02-16 06:59:01.647396 | LOOP [stage-output : Register sources]
2026-02-16 06:59:01.713277 |
2026-02-16 06:59:01.713574 | TASK [stage-output : Check sudo]
2026-02-16 06:59:02.562613 | orchestrator | sudo: a password is required
2026-02-16 06:59:02.750214 | orchestrator | ok: Runtime: 0:00:00.012924
2026-02-16 06:59:02.764286 |
2026-02-16 06:59:02.764444 | LOOP [stage-output : Set source and destination for files and folders]
2026-02-16 06:59:02.799809 |
2026-02-16 06:59:02.800101 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-02-16 06:59:02.868550 | orchestrator | ok
2026-02-16 06:59:02.877271 |
2026-02-16 06:59:02.877399 | LOOP [stage-output : Ensure target folders exist]
2026-02-16 06:59:03.338555 | orchestrator | ok: "docs"
2026-02-16 06:59:03.338902 |
2026-02-16 06:59:03.590328 | orchestrator | ok: "artifacts"
2026-02-16 06:59:03.849908 | orchestrator | ok: "logs"
2026-02-16 06:59:03.865801 |
2026-02-16 06:59:03.865972 | LOOP [stage-output : Copy files and folders to staging folder]
2026-02-16 06:59:03.903180 |
2026-02-16 06:59:03.903489 | TASK [stage-output : Make all log files readable]
2026-02-16 06:59:04.207416 | orchestrator | ok
2026-02-16 06:59:04.216878 |
2026-02-16 06:59:04.217042 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-02-16 06:59:04.252108 | orchestrator | skipping: Conditional result was False
2026-02-16 06:59:04.268114 |
2026-02-16 06:59:04.268284 | TASK [stage-output : Discover log files for compression]
2026-02-16 06:59:04.293175 | orchestrator | skipping: Conditional result was False
2026-02-16 06:59:04.309054 |
2026-02-16 06:59:04.309224 | LOOP [stage-output : Archive everything from logs]
2026-02-16 06:59:04.360030 |
2026-02-16 06:59:04.360221 | PLAY [Post cleanup play]
2026-02-16 06:59:04.369857 |
2026-02-16 06:59:04.369967 | TASK [Set cloud fact (Zuul deployment)]
2026-02-16 06:59:04.453515 | orchestrator | ok
2026-02-16 06:59:04.463731 |
2026-02-16 06:59:04.463857 | TASK [Set cloud fact (local deployment)]
2026-02-16 06:59:04.499685 | orchestrator | skipping: Conditional result was False
2026-02-16 06:59:04.516683 |
2026-02-16 06:59:04.516853 | TASK [Clean the cloud environment]
2026-02-16 06:59:05.171209 | orchestrator | 2026-02-16 06:59:05 - clean up servers
2026-02-16 06:59:05.926554 | orchestrator | 2026-02-16 06:59:05 - testbed-manager
2026-02-16 06:59:06.011999 | orchestrator | 2026-02-16 06:59:06 - testbed-node-1
2026-02-16 06:59:06.108735 | orchestrator | 2026-02-16 06:59:06 - testbed-node-4
2026-02-16 06:59:06.197750 | orchestrator | 2026-02-16 06:59:06 - testbed-node-2
2026-02-16 06:59:06.287048 | orchestrator | 2026-02-16 06:59:06 - testbed-node-5
2026-02-16 06:59:06.378573 | orchestrator | 2026-02-16 06:59:06 - testbed-node-3
2026-02-16 06:59:06.482440 | orchestrator | 2026-02-16 06:59:06 - testbed-node-0
2026-02-16 06:59:06.595810 | orchestrator | 2026-02-16 06:59:06 - clean up keypairs
2026-02-16 06:59:06.613655 | orchestrator | 2026-02-16 06:59:06 - testbed
2026-02-16 06:59:06.639643 | orchestrator | 2026-02-16 06:59:06 - wait for servers to be gone
2026-02-16 06:59:15.451812 | orchestrator | 2026-02-16 06:59:15 - clean up ports
2026-02-16 06:59:15.648852 | orchestrator | 2026-02-16 06:59:15 - 0ebb2cee-d845-44aa-9027-a0960b1078f3
2026-02-16 06:59:15.891829 | orchestrator | 2026-02-16 06:59:15 - 3215954a-0867-4e25-b916-f7d7c86dd4b6
2026-02-16 06:59:16.147234 | orchestrator | 2026-02-16 06:59:16 - 5cfb8811-bb64-4a48-a9b6-5f2596acbc86
2026-02-16 06:59:16.478974 | orchestrator | 2026-02-16 06:59:16 - 8b1f0b10-a0d8-44d7-a7e6-f9fdda30d427
2026-02-16 06:59:16.706489 | orchestrator | 2026-02-16 06:59:16 - c7ab9dd6-e8fd-467f-bf2d-8d97b6e783e6
2026-02-16 06:59:16.912022 | orchestrator | 2026-02-16 06:59:16 - d7e6e1a6-2255-441d-b034-54da3e3b7a74
2026-02-16 06:59:17.341757 | orchestrator | 2026-02-16 06:59:17 - ff4b3066-d4c5-4af8-a0cd-97adb75e3466
2026-02-16 06:59:17.565223 | orchestrator | 2026-02-16 06:59:17 - clean up volumes
2026-02-16 06:59:17.685063 | orchestrator | 2026-02-16 06:59:17 - testbed-volume-5-node-base
2026-02-16 06:59:17.728716 | orchestrator | 2026-02-16 06:59:17 - testbed-volume-1-node-base
2026-02-16 06:59:17.770107 | orchestrator | 2026-02-16 06:59:17 - testbed-volume-2-node-base
2026-02-16 06:59:17.817229 | orchestrator | 2026-02-16 06:59:17 - testbed-volume-4-node-base
2026-02-16 06:59:17.867098 | orchestrator | 2026-02-16 06:59:17 - testbed-volume-0-node-base
2026-02-16 06:59:17.909765 | orchestrator | 2026-02-16 06:59:17 - testbed-volume-3-node-base
2026-02-16 06:59:17.955929 | orchestrator | 2026-02-16 06:59:17 - testbed-volume-manager-base
2026-02-16 06:59:17.999720 | orchestrator | 2026-02-16 06:59:17 - testbed-volume-6-node-3
2026-02-16 06:59:18.044334 | orchestrator | 2026-02-16 06:59:18 - testbed-volume-8-node-5
2026-02-16 06:59:18.092278 | orchestrator | 2026-02-16 06:59:18 - testbed-volume-2-node-5
2026-02-16 06:59:18.143329 | orchestrator | 2026-02-16 06:59:18 - testbed-volume-4-node-4
2026-02-16 06:59:18.188911 | orchestrator | 2026-02-16 06:59:18 - testbed-volume-5-node-5
2026-02-16 06:59:18.229955 | orchestrator | 2026-02-16 06:59:18 - testbed-volume-1-node-4
2026-02-16 06:59:18.277961 | orchestrator | 2026-02-16 06:59:18 - testbed-volume-7-node-4
2026-02-16 06:59:18.322333 | orchestrator | 2026-02-16 06:59:18 - testbed-volume-3-node-3
2026-02-16 06:59:18.362708 | orchestrator | 2026-02-16 06:59:18 - testbed-volume-0-node-3
2026-02-16 06:59:18.404494 | orchestrator | 2026-02-16 06:59:18 - disconnect routers
2026-02-16 06:59:18.541459 | orchestrator | 2026-02-16 06:59:18 - testbed
2026-02-16 06:59:19.512762 | orchestrator | 2026-02-16 06:59:19 - clean up subnets
2026-02-16 06:59:19.578669 | orchestrator | 2026-02-16 06:59:19 - subnet-testbed-management
2026-02-16 06:59:19.748956 | orchestrator | 2026-02-16 06:59:19 - clean up networks
2026-02-16 06:59:19.917641 | orchestrator | 2026-02-16 06:59:19 - net-testbed-management
2026-02-16 06:59:20.196607 | orchestrator | 2026-02-16 06:59:20 - clean up security groups
2026-02-16 06:59:20.233616 | orchestrator | 2026-02-16 06:59:20 - testbed-node
2026-02-16 06:59:20.348737 | orchestrator | 2026-02-16 06:59:20 - testbed-management
2026-02-16 06:59:20.457598 | orchestrator | 2026-02-16 06:59:20 - clean up floating ips
2026-02-16 06:59:20.491301 | orchestrator | 2026-02-16 06:59:20 - 81.163.192.120
2026-02-16 06:59:20.846186 | orchestrator | 2026-02-16 06:59:20 - clean up routers
2026-02-16 06:59:20.967219 | orchestrator | 2026-02-16 06:59:20 - testbed
2026-02-16 06:59:22.575921 | orchestrator | ok: Runtime: 0:00:17.513815
2026-02-16 06:59:22.579670 |
2026-02-16 06:59:22.579847 | PLAY RECAP
2026-02-16 06:59:22.579969 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-02-16 06:59:22.580055 |
2026-02-16 06:59:22.715006 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-16 06:59:22.717428 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-16 06:59:23.449511 |
2026-02-16 06:59:23.449675 | PLAY [Cleanup play]
2026-02-16 06:59:23.465651 |
2026-02-16 06:59:23.465791 | TASK [Set cloud fact (Zuul deployment)]
2026-02-16 06:59:23.523463 | orchestrator | ok
2026-02-16 06:59:23.533956 |
2026-02-16 06:59:23.534145 | TASK [Set cloud fact (local deployment)]
2026-02-16 06:59:23.560053 | orchestrator | skipping: Conditional result was False
2026-02-16 06:59:23.570609 |
2026-02-16 06:59:23.570725 | TASK [Clean the cloud environment]
2026-02-16 06:59:24.728790 | orchestrator | 2026-02-16 06:59:24 - clean up servers
2026-02-16 06:59:25.202966 | orchestrator | 2026-02-16 06:59:25 - clean up keypairs
2026-02-16 06:59:25.220423 | orchestrator | 2026-02-16 06:59:25 - wait for servers to be gone
2026-02-16 06:59:25.261195 | orchestrator | 2026-02-16 06:59:25 - clean up ports
2026-02-16 06:59:25.348273 | orchestrator | 2026-02-16 06:59:25 - clean up volumes
2026-02-16 06:59:25.413052 | orchestrator | 2026-02-16 06:59:25 - disconnect routers
2026-02-16 06:59:25.441953 | orchestrator | 2026-02-16 06:59:25 - clean up subnets
2026-02-16 06:59:25.467441 | orchestrator | 2026-02-16 06:59:25 - clean up networks
2026-02-16 06:59:25.627797 | orchestrator | 2026-02-16 06:59:25 - clean up security groups
2026-02-16 06:59:25.660379 | orchestrator | 2026-02-16 06:59:25 - clean up floating ips
2026-02-16 06:59:25.684549 | orchestrator | 2026-02-16 06:59:25 - clean up routers
2026-02-16 06:59:26.107711 | orchestrator | ok: Runtime: 0:00:01.368797
2026-02-16 06:59:26.111256 |
2026-02-16 06:59:26.111435 | PLAY RECAP
2026-02-16 06:59:26.111557 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-02-16 06:59:26.111620 |
2026-02-16 06:59:26.239117 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-16 06:59:26.240118 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-16 06:59:26.998507 |
2026-02-16 06:59:26.998669 | PLAY [Base post-fetch]
2026-02-16 06:59:27.013976 |
2026-02-16 06:59:27.014122 | TASK [fetch-output : Set log path for multiple nodes]
2026-02-16 06:59:27.069054 | orchestrator | skipping: Conditional result was False
2026-02-16 06:59:27.079517 |
2026-02-16 06:59:27.079712 | TASK [fetch-output : Set log path for single node]
2026-02-16 06:59:27.136109 | orchestrator | ok
2026-02-16 06:59:27.144664 |
2026-02-16 06:59:27.144795 | LOOP [fetch-output : Ensure local output dirs]
2026-02-16 06:59:27.639117 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/38b924e1c53c45ae91259ec19ed86344/work/logs"
2026-02-16 06:59:27.918150 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/38b924e1c53c45ae91259ec19ed86344/work/artifacts"
2026-02-16 06:59:28.177927 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/38b924e1c53c45ae91259ec19ed86344/work/docs"
2026-02-16 06:59:28.199728 |
2026-02-16 06:59:28.199921 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-16 06:59:29.148094 | orchestrator | changed: .d..t...... ./
2026-02-16 06:59:29.148569 | orchestrator | changed: All items complete
2026-02-16 06:59:29.148651 |
2026-02-16 06:59:29.872096 | orchestrator | changed: .d..t...... ./
2026-02-16 06:59:30.606372 | orchestrator | changed: .d..t...... ./
2026-02-16 06:59:30.636487 |
2026-02-16 06:59:30.636630 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-02-16 06:59:30.671590 | orchestrator | skipping: Conditional result was False
2026-02-16 06:59:30.674383 | orchestrator | skipping: Conditional result was False
2026-02-16 06:59:30.695785 |
2026-02-16 06:59:30.695903 | PLAY RECAP
2026-02-16 06:59:30.695985 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-02-16 06:59:30.696027 |
2026-02-16 06:59:30.818555 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-16 06:59:30.820987 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-16 06:59:31.550873 |
2026-02-16 06:59:31.551037 | PLAY [Base post]
2026-02-16 06:59:31.565455 |
2026-02-16 06:59:31.565595 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-02-16 06:59:32.570371 | orchestrator | changed
2026-02-16 06:59:32.580696 |
2026-02-16 06:59:32.580818 | PLAY RECAP
2026-02-16 06:59:32.580892 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-16 06:59:32.580968 |
2026-02-16 06:59:32.695587 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-16 06:59:32.698021 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-02-16 06:59:33.510644 |
2026-02-16 06:59:33.510824 | PLAY [Base post-logs]
2026-02-16 06:59:33.522216 |
2026-02-16 06:59:33.522365 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-02-16 06:59:33.974766 | localhost | changed
2026-02-16 06:59:33.985552 |
2026-02-16 06:59:33.985704 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-02-16 06:59:34.021699 | localhost | ok
2026-02-16 06:59:34.025850 |
2026-02-16 06:59:34.025984 | TASK [Set zuul-log-path fact]
2026-02-16 06:59:34.041689 | localhost | ok
2026-02-16 06:59:34.053339 |
2026-02-16 06:59:34.053472 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-16 06:59:34.079784 | localhost | ok
2026-02-16 06:59:34.083974 |
2026-02-16 06:59:34.084154 | TASK [upload-logs : Create log directories]
2026-02-16 06:59:34.594074 | localhost | changed
2026-02-16 06:59:34.598905 |
2026-02-16 06:59:34.599119 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-02-16 06:59:35.111211 | localhost -> localhost | ok: Runtime: 0:00:00.006713
2026-02-16 06:59:35.115318 |
2026-02-16 06:59:35.115436 | TASK [upload-logs : Upload logs to log server]
2026-02-16 06:59:35.658971 | localhost | Output suppressed because no_log was given
2026-02-16 06:59:35.661335 |
2026-02-16 06:59:35.661452 | LOOP [upload-logs : Compress console log and json output]
2026-02-16 06:59:35.724342 | localhost | skipping: Conditional result was False
2026-02-16 06:59:35.729513 | localhost | skipping: Conditional result was False
2026-02-16 06:59:35.741869 |
2026-02-16 06:59:35.742103 | LOOP [upload-logs : Upload compressed console log and json output]
2026-02-16 06:59:35.799594 | localhost | skipping: Conditional result was False
2026-02-16 06:59:35.800217 |
2026-02-16 06:59:35.803771 | localhost | skipping: Conditional result was False
2026-02-16 06:59:35.817000 |
2026-02-16 06:59:35.817214 | LOOP [upload-logs : Upload console log and json output]